Proper file IO in HASCAL

Name: Onanymous 2014-12-10 10:27

I was writing some text file processing in Pythong using generators, you know, the usual stuff: lazily read lines, parse each line, lazily perform some grouping, etc. Then I realized I have a problem, two requirements that don't play well with each other:

1. On one hand, I want to process lines lazily, so that at any point I use only as much memory as I need to process one line or one group of lines. I don't want to intentionally or accidentally pull the entire file (or any intermediate dataset) into memory.

2. On the other hand, I want to deterministically close the file after I'm done reading it.

As a result, I have to be careful to force all my lazy stuff before I leave the with open(fname, 'r') as f: ... block. And it's really not obvious that anything is wrong with code that doesn't do that (apart from getting "I/O operation on closed file" at runtime, of course).
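A minimal sketch of the pitfall (the function names are made up for illustration, not from the thread): the broken version returns a generator out of the with block, so the file is already closed by the time anyone consumes it; the fixed version makes the whole function a generator, so the with block only exits once the stream is exhausted or explicitly closed.

```python
import os
import tempfile


def parse_lines_broken(fname):
    with open(fname, "r") as f:
        # Returning a lazy generator expression here is the bug:
        # the with block exits (and closes f) before any line is read.
        return (line.strip().upper() for line in f)


def parse_lines_fixed(fname):
    # Making the whole function a generator keeps laziness AND
    # deterministic cleanup: the file opens on the first next(),
    # and the with block unwinds (closing the file) when the
    # generator is exhausted or its close() is called.
    with open(fname, "r") as f:
        for line in f:
            yield line.strip().upper()


# Demo on a throwaway file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("foo\nbar\n")

gen = parse_lines_broken(tmp.name)
try:
    next(gen)
except ValueError as e:
    print(e)  # the "I/O operation on closed file" error from the post

print(list(parse_lines_fixed(tmp.name)))  # ['FOO', 'BAR']
os.remove(tmp.name)
```

One caveat with the fixed version: if the consumer abandons the generator halfway, the file is closed only when gen.close() is called or the generator is garbage-collected, which is exactly the determinism problem OP is asking about.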

How does the HASCAL master race solve this problem, which looms even more menacingly for them, with laziness being the default and pervasive? Or, as quick googling makes it seem, do they not stoop to such pleb things as reading files efficiently and closing them at the right time?

Name: Anonymous 2014-12-10 13:45

>>1
This is a problem Haskell people used to talk about a lot, and then they developed something called "iteratees" (like inside-out iterators) so they could do lazy streaming and still properly close the file as soon as it's been read. I think there was also something else that came after that, but I don't remember what it was called.
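The "inside-out" part can be sketched in Python, since that's what OP is using. This is a loose analogy, not Haskell's actual iteratee API: instead of the consumer pulling from a lazy stream (with the file staying open as long as anything might still pull), the code that owns the file drives the loop, pushes lines into a consumer coroutine, and closes the handle the moment the consumer is done. All names here are invented for the sketch.

```python
import os
import tempfile


def run_iteratee(fname, consumer):
    """Feed lines to `consumer` (a primed generator). The file's
    owner is this function, so it closes the handle deterministically,
    even if the consumer finishes before the file runs out."""
    with open(fname, "r") as f:
        for line in f:
            try:
                consumer.send(line)  # push a line in, don't pull
            except StopIteration as stop:
                return stop.value    # consumer done early: file closes now
    try:
        consumer.send(None)          # signal end of input
    except StopIteration as stop:
        return stop.value


def count_nonempty():
    """Example consumer: counts non-blank lines, one at a time,
    never holding more than the current line in memory."""
    n = 0
    while True:
        line = yield
        if line is None:
            return n
        if line.strip():
            n += 1


# Demo on a throwaway file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("foo\n\nbar\n")

c = count_nonempty()
next(c)  # prime the coroutine
print(run_iteratee(tmp.name, c))  # 2
os.remove(tmp.name)
```

The design point is the inversion: the consumer never sees the file handle at all, so "who closes it, and when" has exactly one obvious answer.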

Name: Anonymous 2014-12-10 13:49

>>5
Pipes and conduit.
