Date: Tue, 15 Aug 1995 16:38:20 -0700
From: zosel@phoenix.ocf.llnl.gov (Mary E Zosel)
To: hpff@cs.rice.edu
Subject: July HPFF Meeting Minutes

High Performance Fortran Forum Meeting
July 26-28, Denver, Colorado
Record of Action: Mary Zosel

Executive Summary

The July meeting of HPFF was attended by 25 people from 21 institutions. In the reports from the three subgroups, several new language proposals were outlined, including proposals for asynchronous I/O, a generalized block mapping, a shadow specification, and an irregular mapping specification. These were all treated as first proposal readings. A second group of presentations was given for initial group feedback, including an ON HOME mechanism for directing processor work, generalized loop reductions, an interface mechanism for calling C programs, a proposal for (a restricted form of) task subprograms, a proposal for additional requirements for explicit interfaces that would simplify compiler subroutine call interfaces, and the notion of creating a kernel part of the language which would help the user stay away from features that are more costly in performance. Additional work on proposals in these areas is expected at the September and November meetings. The schedule of work for the next three meetings was drafted. Feedback to X3J3 about the draft F95 and an SC95 BOF were discussed. The next HPFF meeting is scheduled for the Dallas area, September 20-22.

End of Executive Summary
____________________________

Detailed Record of Action

July 26: Subgroup meetings chaired by Rob Schreiber, David Loveman, and Piyush Mehrotra were held from 1:30 through the evening.
-------
July 27: Ken Kennedy called the meeting to order at 8:45. Introductions and the initial count of installations were made. Twenty-five people from 21 institutions were present. In the review of the vendor list, CCC and ACSET were removed because they no longer exist. There was a query about the status of the effort of an Edinburgh group.
Ken will check this. The current vendor implementation list is:

Announced Products
  Applied Parallel Research, Digital, Hitachi, Intel, Meiko, Motorola, NA Software, NEC, Pacific Sierra Research, The Portland Group, Inc. (PGI), SofTech

Announced Efforts
  ACE, Convex, Fujitsu, IBM, Lahey, MasPar, NAG, nCube, Thinking Machines

Interested
  Cray Research, Edinburgh Portable Compilers, HP, Silicon Graphics, Sun
----------
There was a request to post information about the meetings in more places. Barbara Chapman reported that she is looking for funding for additional European participation.

David Loveman began the subgroup reports with an overview of the subgroup E (external issues) discussions. Subgroup E meetings addressed a proposal for an HPF kernel, mechanisms to make interoperability between HPF and C/C++ easier, the role of the subset in HPF version 2, whether some features should be removed from the language, a list of tool requirements, and the problems associated with multilevel distributions when an arbitrary configuration involving clusters of SMPs and possibly workstations might exist.

The goals of a kernel would be:
  - new code, written well, should perform well
  - high performance across all platforms
  - no "performance surprises"
  - no runtime overhead
  - no requirement for interprocedural analysis
  - simplified user model (easily understood and taught, commonly used, and valuable for all platforms)

For interoperability, the issues discussed were:
  HPF calling C (C++)
    - compiler converts all "basic" types
    - X Windows library as a typical target
    - define a module C_INTERFACE with definitions of Fortran 90 KINDs for C types
    - EXTRINSIC(C)
  C (C++) calling HPF
    - mapping of function (and module function) names
    - parameter passing
    - pointers
    - structures

An ISO group associated with WG5 is also discussing this. Initial contact was made with the group, but the leadership is in the process of changing.
The role of the HPF subset was discussed, with the general feeling of the subgroup being that by 1996 it will have outlived its usefulness. Similarly, the group feels there may be reason to consider removing some features from HPF. (This is a stronger statement about the language than defining features that are not in the kernel.)

Finally, David reviewed the subgroup discussion of evolving parallel architecture models and what they mean for HPF applications. For example, if 2 SMP "boxes" with 4 processors each are used for an application, what does number_of_processors return? 2 or 8 or (/2,4/)? A variety of similar questions can be constructed when a hierarchy/mixture of SMPs, workstations, and MPPs are connected as a single application. The question was asked about how much of this is vendor specific. Rob Schreiber commented that this is at least a good CCI question, and Ken added that CCI is the shallowest way to look at it - we should ask the most general questions and then ask what should be in HPF2.

Scott Baden next presented more information about the subgroup E kernel discussion. The goals he listed were:
  - make the common cases go fast
  - restricted model - simplified index calculations
  - reduced checks for realignment, etc.
  - simpler compilation
  - simplified programmer's model

He noted that the kernel would probably grow as new features are added to HPF. The idea presented by the subgroup for consideration was that this should be a new extrinsic kind of HPF. A straw poll about the idea of having a kernel was taken, with the group expressing general support: 15 - 7 - 1. But the idea of using an extrinsic for this purpose was not considered a good idea.

The next item from Subgroup E was the possible definition of external interfaces from HPF to other tools such as debuggers or performance tools. Mary presented some of the requirements one performance tool builder had generated.
During the break, other features were added - to get the beginnings of a requirements list, both from the user point of view and the tool-builder point of view.

Things a tool builder would like to see:
  - Mechanism to match line numbers with source
  - Where the receives come from - what data, from whom, etc.
  - Mechanism to track asynchronous messages
  - The communication used may actually use some "down to the metal" messaging rather than the user-level message packages that performance tools know how to trace. How will this be instrumented?
  - Need trace of scalar info as well as arrays.
  - Separate compilation issues.
  - Static vs. dynamic trace?
  - Performance tool needs distribution info for visualization; this may involve some variable-length trace records because of multiple dimensions.
  - Unbalanced and irregular mappings (odd numbers of items per processor, etc.)
  - Routine entry remapping. (Name changes.)
  - Way to show address computation overhead.

Things a user would like to see:
  - Way to verify replicated data is consistent.
  - One tool for both tuning and debugging
  - Per source-line profiling
  - Total CPU time per processor
  - Message volume per processor and per processor pair
  - Viewing global arrays in various text and graphics formats
  - Viewing array sections
  - Viewing array-valued expressions, including things like minval(A)

Ken said that there was discussion of this at a recent ARPA PI meeting and agreement that there should be a standard for these interfaces, but he believes it is outside the scope of HPF2 to do a formal standard. Barbara Chapman pointed out that there have also been European discussions on this topic that would be useful to coordinate.

Break

The next agenda item was a report from Piyush Mehrotra on Subgroup D activities. Several CCI results were presented, followed by specific HPF2 proposals that were considered in their first reading.

CCI #11: Currently pointers cannot be mapped - they acquire the mapping of the target object.
The proposal is that conforming dimensions of the pointer object and target either must be unmapped, or both be identically distributed, or both be sequential. The issue is whether the pointer exists as an instance by itself, or is just associated with its bound target. There is the case of pointers to sections of arrays: just knowing that a pointer refers to a block-distributed object, one cannot describe pointers to sections. Andy Meltzer recalls that this was related to the fact that allocatable objects don't have their distribution until after they are assigned. A straw poll was taken with the special understanding that a substantial vote for "abstain" would mean reconsider. The vote was 7 - 3 - 10, so this CCI item is returned to committee for further clarification of the issue.

CCI #12: In order to follow F90, bounds of array variables should be declared before they are mapped (subgroup vote: 4-1-2).

      !illegal
!hpf$ align a(:) with b(:)
      real a(100), b(100)
!hpf$ distribute b(block)

The full group rejected this restriction by a vote of 6 - 9 - 6.

CCI #27: HPF directives should be ordered such that a) processor arrangements are declared before use, and b) align targets are distributed before use in align statements (subgroup vote: 0-3-4 - preference for #12 rather than #27).
  In favor of #12: 6 - 9 - 6 (institutional vote)
  In favor of #27 (want ordering to count, both a and b): 3 - 12 - 5

CCI #13: Consider the example:

      module mod1
      integer :: i
      end module mod1

      module mod2
      use mod1
!hpf$ processors p (i, number_of_processors()/i)
      end module mod2

The processor arrangement extent can change! Module array variables must use constant specification expressions. Proposal: Restrict bounds of processor arrangements in modules and the main program to use number_of_processors and constants. Henry Zongaro has specific text. Institutional vote: 17 - 1 - 3.

CCI #23: There is confusion in the text and syntax between object-name, template-name and processor-name.
Proposal: Delete
  H333 template-name is object-name
  H336 processor-name is object-name
(These are names, but not object-names - see align target rule H321.)
Approved: 17 - 0 - 3.

CCI #28 - New item submitted by Larry Meadows, just before the meeting. The syntax of combined directives allows:

      ALIGN WITH A(*,:) :: B(:)

Is this a shape-spec-list for array-decl or an array-spec for alignment? It is confusing to users. Proposal: Add syntax rules to allow a shape-list in combined directives only with template names and processor names. The full group requested that the subgroup come back with a more specific proposal.

CCI #24: Question: Why are variables in a module allowed to be dynamic whereas common block covers and saved variables are not? The subcommittee considered two different proposals:
  Allow common covers and saved variables to be dynamic: 0-5-2
  Restrict module variables to be static: 6-0-1
In general group discussion, the recollection was that there was a reason for restricting redistribution in common blocks, and also that we were not going to make special efforts for language features that are the "old" way of doing things. Both proposals were rejected by a vote of 3 - 10 - 7.
--------
Three new proposals from Group D were presented.
---------
Proposal for generalizing block distributions:

  dist-format is BLOCK [(n)]
             or BLOCK [(int-array)]
             ....

Constraint: int-array must be a one-dimensional integer array of size equal to the extent of the corresponding dimension of the target processor arrangement.
Constraint: The sum of the values of int-array must match the size of the dimension of the array being distributed. All values must be non-negative.
Example:

      parameter (s = (/2,25,10,8,55/))
!hpf$ processors p(5)
      real a(100)
!hpf$ distribute block (s) onto p :: a

       P1     P2        P3        P4        P5
      1 2 | 3...27 | 28...37 | 38...45 | 46...100

  - Dimensional
  - Can be used for dynamic distributions
  - Requires maintaining a table for looking up extents

Motivations include allowing for physical boundaries and adjusting the sizes of blocks for PIC codes. Barbara points out that with a small representation that is fairly efficient, table lookup can be used instead of computation for the first element. Carl Offner suggested that maybe this feature should be in an extended HPF, but not in the "real" HPF. Rob Schreiber commented that this can be handled by the "mapped" mapping, but that this might be much easier to compile. There was some discussion about whether this was a research extension or part of the language. Rob proposed that the name of the next report be HPF version 2.0, not HPF2, and that we should continue to have a JOD. Ken requested that Subgroup E look at the shape of the report and make some kind of proposal that addresses questions about kernel language, extended language, JOD, research language, etc. Back to the generalized block (leaving "what part of the language" until later): a straw poll about this proposal as a first reading was taken - and supported 17 - 2 - 4.
---------
Proposal for irregular or generalized mapping using an address array:

  dist-format is BLOCK ...
             ....
             or INDIRECT (int-array)

Constraint: int-array must be a one-dimensional integer array of size equal to the extent of the corresponding dimension of the array being distributed. A semantic constraint on the values of the map array is needed.

Example:

!hpf$ processors P(4)
      real a(100)
      integer map (100)
!hpf$ dynamic a
      map = ...
!hpf$ redistribute a(INDIRECT(map))

  - Dimensional
  - Changing "map" does not change the distribution
  - Requires maintaining an internal address array as big as the array being distributed => the address array will be distributed => communication to find out the location of elements.
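The two proposed distribution formats differ mainly in how the owning processor of an element is found. The following sketch (Python, illustrative only - not part of any proposal; the helper names block_owner and indirect_owner are invented here) models the ownership rules with the same 1-based numbering as the Fortran examples above:

```python
# Illustrative sketch of the ownership semantics of the two proposed
# distribution formats.  Elements and processors are numbered from 1,
# as in the Fortran examples.
from itertools import accumulate

def block_owner(i, sizes):
    """Generalized BLOCK(sizes): processor k owns the k-th contiguous
    chunk of sizes[k-1] elements."""
    for proc, upper in enumerate(accumulate(sizes), start=1):
        if i <= upper:
            return proc
    raise IndexError("element index exceeds sum of block sizes")

def indirect_owner(i, mapping):
    """INDIRECT(mapping): the map array names the owner of each
    element directly."""
    return mapping[i - 1]

s = (2, 25, 10, 8, 55)   # the BLOCK(s) example above; sums to 100
print([block_owner(i, s) for i in (1, 2, 3, 27, 28, 37, 38, 45, 46, 100)])
# -> [1, 1, 2, 2, 3, 3, 4, 4, 5, 5], matching the P1..P5 table above

m = [1 + (i % 4) for i in range(100)]   # a hypothetical map onto P(4)
print(indirect_owner(1, m))             # -> 1
```

The sketch also makes the cost difference visible: block_owner needs only the small table of partial sums per dimension, while indirect_owner must consult an array as large as the array being distributed, which is why the minutes note that the map array itself will be distributed.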
Questions about the feature included how it plays with the lower bound of arrays in the case where there is an explicit ONTO. Various people commented that there has been lots of experience with this feature and that many optimizations are still possible. Most of the work has been in the context of distributed memory machines, not SMPs. A first-reading straw poll for irregular mapping was taken: 14 - 5 - 5. Rob proposed that the mapping be liberalized so that a function is allowed. This is useful where the indirect array is too large to replicate, so a function computes the mapping - e.g. a pure integer function of one integer that might access replicated data. A straw poll supported the idea of this functionality: 14 - 3 - 8.
----------
Last subgroup D proposal - not yet a first reading. Proposal: Remove the constraint on distributing subobjects of derived types. Add the constraint: a component of a derived type can be mapped if and only if none of its subcomponents are mapped. [This is the rough idea, but the exact statement has not yet been worked out.]

Example:

      type W
        real a(100)
!hpf$   distribute a(block)
      end type W

      type W1
        real c(100)
      end type W1

      type V
        real b(100)
!hpf$   distribute b(block)
        type(W1) d(100)
!hpf$   distribute d(block)
        type (W) e(100)    ! Cannot be distributed
      end type V

      type (V) :: f(100)   ! cannot be distributed

Some of the issues come up with pointers - e.g. for tree structures - can these be distributed? Can components of a structure in a module be private? Is it sufficient to just have pointers that refer to components? It was noted that in this context, "mapped" really means "explicitly mapped".
-------------
In the time remaining before the lunch break, Carl Offner presented an idea for allowing a user to give advice to the compiler about the depth of shadow cells in various dimensions of a distribution. It was noted that there are some questions about how this behaves across procedure boundaries.
Various forms of a directive were suggested, allowing the user to specify an additional argument in a block or cyclic specification, and/or to use a keyword "shadow". The following variations all express similar advice for a shadow of depth "w":

!hpf$ distribute (block (n, w)) :: A
!hpf$ distribute (block (n, shadow=w)) :: A
!hpf$ distribute (block (shadow=w)) :: A
!hpf$ distribute (cyclic (n, w)) :: A
!hpf$ distribute (cyclic (n, shadow=w)) :: A
!hpf$ distribute (cyclic (shadow=w)) :: A

The straw poll about having a proposal along this line: 15 - 4 - 6.

BREAK FOR LUNCH

The group reconvened at 1:20. Andy Meltzer presented the specific details of a proposed C interface. Following is a shortened version of his presentation, giving a very simple example:

      MODULE C_INTERFACE    ! supplied by vendor
        integer, parameter :: c_int = 8    ! from a 64 bit machine
        integer, parameter :: c_short = 4
        integer, parameter :: c_long = 8
        integer, parameter :: c_float = 8
        ...
      END MODULE C_INTERFACE

The user is required to give an explicit interface for a C call - which might look like the following:

      INTERFACE
        extrinsic (C) function cfunc (x, i, j ....)
          use C_INTERFACE
          real (kind = c_float) cfunc
          real (kind = c_float) x
          integer (kind = c_long) i
          integer (kind = c_int) j
          ...
        end function
      end INTERFACE

The actual call would look like:

      integer p, q
      real a, r
      ...
      r = cfunc(x, p, q ...)

The full proposal includes more detail about other types. The general idea is that the functionality of the explicit interface is extended such that the compiler is responsible for the actual conversion of the arguments to match the argument mechanism and types expected by C - e.g. value instead of address, and the proper "size" of ints and reals. The compiler knows to do this because of the "extrinsic" specification. There is a minor difficulty with the case of the C name that the compiler will have to resolve. A straw poll was taken on the question of whether we want to do something like this in HPF: 12 - 2 - 10.
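The vendor-supplied C_INTERFACE module above is, in effect, a table describing C's types on the target machine. As a loose analogy (illustrative only, not part of the proposal), Python's ctypes module exposes the corresponding C type sizes for whatever platform it runs on - the same platform-specific information an HPF vendor would bake into C_INTERFACE:

```python
# Illustrative analogy: the proposed C_INTERFACE module amounts to a
# vendor-supplied table of C type properties for the target machine.
# ctypes reports the host platform's C type sizes, which vary by
# platform -- exactly why the proposal leaves the module to the vendor.
import ctypes

c_type_sizes = {
    "c_short":  ctypes.sizeof(ctypes.c_short),
    "c_int":    ctypes.sizeof(ctypes.c_int),
    "c_long":   ctypes.sizeof(ctypes.c_long),   # 8 on 64-bit Unix, 4 on Windows
    "c_float":  ctypes.sizeof(ctypes.c_float),
    "c_double": ctypes.sizeof(ctypes.c_double),
}
for name, size in c_type_sizes.items():
    print(f"{name}: {size} bytes")
```

Because c_long (and historically c_int) differ across machines, any portable calling convention has to consult such a table at compile time rather than hard-code sizes, which is the point of requiring the explicit interface with C_INTERFACE kinds.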
--------------
Subgroup C - led by Rob Schreiber - presented its proposals:
--------------
First reading of the asynchronous I/O proposal by Larry Meadows:

Example:

      OPEN (... SYNC='ASYNC')
      READ/WRITE (UNIT=10, ASYNC='YES') ...
      WAIT (UNIT=10)

Syntax:
  - new OPEN specifier: SYNC = 'sync' or 'async'
  - new I/O specifier: ASYNC = 'yes' or 'no'
  - new statement: WAIT (unit-n [, END=label] [, ERR=label] [, IOSTAT=ivar] [, DONE=lvar])

WAIT is blocking unless the DONE variable is present, in which case it will be non-blocking and the done-variable will be set to true or false.

Between a READ/WRITE and the associated WAIT:
  - no redefinition, undefinition, change in mapping, or pointer association for any I/O variable
  - for READ, no access to any I/O variable
  - no I/O operation on the given UNIT

ISSUES:
  - Allowed on direct and sequential, formatted and unformatted, advancing and non-advancing forms of I/O.
  - The OPEN specifier aids the implementor, but doesn't require async I/O.
  - Only one outstanding request per unit; preserves synchronous semantics.
  - Errors are returned by READ/WRITE or WAIT.
  - WAIT allows polling using "DONE=".
  - WAIT with no outstanding request is a no-op.

There was discussion about whether there should be handles to identify the I/O. That would allow more than one read or write to be specified for a given unit - but might also lead to confusion about the file position. With handles, this might facilitate extension to other forms of parallel I/O. There was a question about whether this belonged in HPF or F9x. Mary expressed the opinion that it should go in HPF, because it would take too long to wait for a new round of the Fortran standard, and it addresses some of the parallel I/O issues. As a first reading - as part of HPF - the straw poll was 16 - 2 - 4.
--------------
Rob Schreiber reported that for "ON clauses", a presentation is not ready, but that there is agreement in the subgroup that we want some mechanism for this. He next made a presentation of ideas for new forms of reductions.
Reduction operations in loops are very common, but currently they cannot appear in INDEPENDENT loops. Compilers vectorize such loops by promoting the scalar variables to arrays, but this approach may require very large temporaries. The subgroup is looking for feedback about the following ideas.

Reductions #1: Example (where x might be scalar or array):

!hpf$ independent    ! or is it??
      do i = 1, 33
!hpf$ reduce x = x + ...
!hpf$ reduce x = x - ...
      enddo

Restrictions:
  - No other occurrence of the reduction variable (x).
  - Only intrinsic operators of a single family.
Semantics: The final value of the reduction variable is well-defined, although non-deterministic.

Reductions #2: It would also be useful if we could do:

!hpf$ reduce X = X .myop. ...

Method 1: Specify an initial local value and a final merge procedure.
Method 2: Roll your own.

The model is that there is a local variable with a fan-in at the end of the loop; X is what you would initialize with the identity value for the operation. If X had an initial value, one would merge that in at the end too. For user-defined operations, we need to know the identity for the user-defined op and how to do the merge at the end.

Reductions #3 (close to "roll your own", but with the help of the "reduce" directive):

      real local_x (number_of_processors())
!hpf$ distribute local_x (block)
      local_x = my_init_val()
!hpf$ independent, on f(i)
      do i = 1, 34
!hpf$ reduce local_x(f(i)) = local_x(f(i)) .myop. ...
      enddo
      final_x = my_reduce(local_x(:))

  - Simple with the ON clause
  - Still need !HPF$ reduce
  - A "where-am-i" intrinsic might be necessary?

This really isn't independent, but it is independent within a processor as long as the update is done atomically. It was pointed out that compilers can analyze these loops, but that with a reduction operation the user can't legally use the INDEPENDENT directive (with existing semantics).

Straw polls about the 3 approaches:
  continue to look at #1 (simple operations): 17 - 2 - 5
  continue to look at #2 (for user-defined ops - user gives identity and merge): 3 - 5 - 16
  continue to look at #3 (more user detail): 3 - 8 - 13

To clarify what the above votes said about supporting reductions on user-defined operations, there was a straw poll about who thinks this should apply to more than the basic operations in the language: 17 - 3 - 4. Ken interpreted this vote as saying "row harder". In additional discussion, it was commented that the example didn't need the "where-am-i" intrinsic because of the DO ON syntax. Chuck expressed concern that the current definition of DO INDEPENDENT is a statement of fact about do-loops; it will be hard to extend this in the presence of the reduction operation. There was a question about the meaning of calling an XYZ_LOCAL inside either a FORALL or a DO INDEPENDENT.
----------
Jaspal Subhlok gave an overview of the direction the subgroup is going with a proposal for the addition of tasks to the language.

Task Parallelism Proposal:
  1) Processor subgroups; variables tied to a subgroup; ON ALL. For simple statements it may be easy to see what task should do a statement, but this is hard with subroutines.
  2) Task subroutines that are PURE+ and execute on a subgroup of processors (with ON implicitly).

Issues:
  a) Should we do this? It is a significant change even if it looks simple (e.g. recursion, trees, etc.).
  b) Should we have "globals" visible to subgroups?
  c) Is it necessary to have explicit tasking sections? Is a parallel section necessary?

      ON 1 call f(a1)
      ON 2 call g(a2)

This is trivial. If there is a call that passes a global, then it is more complicated and the compiler has to check:

      ON 1 call f (a1, a)
      ON 2 call y (a2, a)

Pros:
  1) Easy to see what happens
  2) May be able to force task parallelism
Cons:
  1) Not clear if it is necessary
  2) Might force a structure which can make things harder.
These apply to subroutines declared as tasks, and there would be restrictions on what the routine can access. Straw poll about introducing a feature like this: 9 - 8 - 5.

Some of the questions / reservations the group raised were:
  - do we have experience that says it is ok to ignore?
  - is it general enough for SMP tasks? (e.g. is the PURE restriction too strong?)
  - how does this interact with I/O?

A straw poll on whether we should support some form of task parallelism: 14 - 4 - 5. The group was not ready for votes on global visibility.

Before the full group split up into continued subgroup meetings, there was a straw poll to find out how many people supported elimination of the HPF subset: 19 - 1 - 3. In jest, it was suggested that perhaps the group should appoint a "feature closing commission". The remainder of the day was devoted to subgroup meetings.
-------------
Friday, July 28, beginning at 7:45 AM (to allow for early departure for flights).

Subgroup D Report

CCI #28 (no vote was recorded - so assume this was left open). The proposal is to allow only the :: x(:) form of the align. On page 24, make the following changes:

  Line 6, change "entity-decl-list" to "hpf-entity-decl-list"
  After line 14, add:
    "H303 hpf-entity-decl is hpf-entity [(explicit-shape-spec-list)]
     H304 hpf-entity is object-name
                     or template-name
                     or processors-name"
  After line 20, add:
    "Constraint: If an explicit-shape-spec-list appears, hpf-entity must be a template-name or processors-name."
  On lines 34-35, change two occurrences of "object-name" to "hpf-entity".
  On line 35, before "If both" insert "If an explicit-shape-spec-list appears, hpf-entity has the dimension attribute."
---------
Report from Subgroup C:

There was an informal presentation of the idea of ON HOME. The first reading will be next time.
Instead of an ON clause specified with DO, the proposal is for an ON associated with any arbitrary group of statements:

      ON HOME (...)
        statement list
      ENDON

where the home can be the HOME of {a regular section, a template, a processor arrangement}. These can be nested, because ON HOME may select just a few of the processors. For a loop, the ON would be on the body of the loop. These would be directives - not executable statements, but advice.

      DO I = ...
!hpf$   ON HOME (...)
          .....
!hpf$   ENDON
      END DO

Various questions were asked about requiring properly nested language constructs, no go-tos into the construct, etc. Also, a rule is needed for the case where there are zero processors.
-----------
Further ideas from Joel Williamson about reductions inside loops were presented. If the code is the following, where GS is of a user-defined type, what do I have to tell the compiler?

      GS = ...
      DO I = 1, N
        GS = GS + ...
      ENDDO

      BIGNUM GS, S
      s = @zero()    ! something for initialization
!HPF$ INDEPENDENT
      DO I = 1, N
!HPF$ REDUCE S = S + ...
      ENDDO
      combine (s, bignum_plus)    ! is this a statement or a directive?
      GS = GS + s

Why not put the combine op on the REDUCE directive? Then maybe the identity element could be put on the INDEPENDENT directive. There was a question about the needs of nested loops. The subgroup will continue discussions. There was general support for a more substantive form of REDUCE, beyond just plus, and a feeling that the subgroup is going in the right direction.
-----------
Subgroup E report:

At the next meeting there will be a first reading of a kernel proposal and an intercallability proposal, and also a straw proposal for a document organization plan.

Carl made a proposal about new requirements for explicit interfaces. The motivation is to make explicit interfaces more consistent with F90, to make section 3.10 easier to understand, and to eliminate the idea that a kernel needs a separate model of subprogram interfaces.
The proposal is that an explicit interface is required in each of the following cases:
  (1) a parameter is passed transcriptively or with the INHERIT attribute
  (2) the mapping of the dummy argument is not the same as the mapping of the actual.

This really doesn't leave a lot of cases where the explicit interface is not required - just the case where both sides know for sure what the mapping is. This also provides the case where a descriptor isn't needed, except for the transcriptive case. The reaction to the proposal was positive, and the group decided to call the presentation a first reading, with a straw vote of 19 - 2 - 0 in support.
-------------
The next agenda item was a discussion of proposal schedules for the next meetings. Following are the goals for proposal processing.
-------------
  Group E              first reading    second reading
    kernel             Sept             Nov
    interop            Sept             Nov
    format             Nov              Jan (straw poll in Sept)
    explicit interf    July             Sept

  Group D              first reading    second reading
    gen block          July             Sept
    irreg map          July             Sept
    dist exten         Sept             Nov
    shadow width       Sept             Nov

  Group C              first reading    second reading
    async              July             Sept
    on                 Sept             Nov
    reduc              Sept             Nov
    task               Nov              Jan

It was noted that nothing on parallel I/O is on the proposal list. Ken and Chuck Koelbel will get together with Alok to deal with this. For now, for document updates, subgroups will work with the current chapters. The comment was made that the September meeting looks heavy. We will have a heavy schedule - please make an effort to do homework.
-----------
Ken reported on a series of workshops addressing a compiler infrastructure. The first will be Aug. 8-9 in Houston, for the research community to settle on a limited set of goals. At the second workshop in October, the vendors will be invited. Ken will get an announcement out to core about the date and a call for interest.

A response to X3J3 about the F95 proposal was discussed. Subjects might include the concern about floating point exceptions and approval for including HPF features, such as FORALL and PURE.
Carol Munroe and Chuck will review the chapter on elementals. The X3J3 meeting is Aug. 21-25, so comments are due by then.

In additional announcements, there was mention of the workshop about experience using HPF compilers that Brian Smith has proposed. His email is smith@cs.unm.edu. We should invite Brian to use the HPF email list if it would be useful for his purposes.

We will ask for a 2-hour BOF at SC95 to present the new features under consideration.

Rob reported that Walt Brainerd is looking for short articles about HPF for a journal. Rob is coordinating this for Walt, so you can send contributions to schreiber@riacs.edu.

There was a review and confirmation of the decision to hold the next two meetings in the Dallas area. (Vote 8 - 4 - 9.)
-------------
LPF Report (excerpts): Andy Meltzer

LPF implementations - still none. Install sites (see HPF site list).

The LPF Total Quality Mangling (TQM) team, headed by Joel "Mangler" Williamson, has come up with the following questions. Your continued non-involvement is appreciated:

1. LPF Purpose (select one):
   - Performance of your multiprocessor will never exceed that of any component
   - Performance of your multiprocessor will never exceed that of your 2 yr old son.
   - What is a multiprocessor?
   - Don't care

2. How well has LPF done with (1)?
   - well
   - poorly
   - really poorly
   - miserable
   - don't care

3. LPF uses certain pessimizations to achieve performance. Rank the importance of each from -5 to -5.1:
   - adding invariant code
   - adding stop and pause
   - every assignment in control
   - redistribute all arrays
   - never deliver compilers
   - hpf committee design languages
   - ADA committee helps
   - don't care

4. What would you add to LPF to help achieve its goals (very tiny box of space for answer).

New rules for this LPF meeting:
   - all members must propose at least 3 new features
   - features must be arcane
   - must require change to current spec

New straw votes for LPF language (official vote):
   - How many people would prefer the overhead projector to be shifted over an inch?
   - How many would like the cans of soda lined up differently in the back of the room?
   - How many people agree with congress wrt the Bosnia arms embargo?
   - How many like brussels sprouts?

ALL of the above were official votes - with the majority voting abstain/don't care.

Straw votes: The LPF favorite color - it is recommended we adopt puke-green as our color; this must be approved as a color by the X250J57 Committee.

Meeting adjourned. Next meeting: Sept. 20-22, Dallas area.
---------------------------------------
Attending the July 95 HPFF Meeting:

  Robert Babb         U. of Denver              babb@cs.du.edu
  Marc Baber          APR
  Scott Baden         UCSD                      baden@cs.ucsd.edu
  Barbara Chapman     Vienna Univ.              barbara@par.univie.ac.at
  Alok Choudhary      Syracuse U.               choudhar@cat.syr.edu
  James Cowie         Cooperating Systems       cowie@cooperate.com
  Will Denissen       Delft Univ.               den_wja@tpp.tno.nl
  Steve Hammond       NCAR                      hammond@niwot.scd.ucar.edu
  Tom Haupt           Syracuse U.
  Ken Kennedy         Rice U./CRPC              ken@rice.edu
  Charles Koelbel     Rice U.                   chk@cs.rice.edu
  David Loveman       Digital                   loveman@msbcs.enet.dec.com
  Larry Meadows       The Portland Group        lfm@pgroup.com
  Piyush Mehrotra     ICASE                     pm@icase.edu
  Andy Meltzer        Cray Research             meltzer@cray.com
  Carol Munroe        Thinking Machines         munroe@think.com
  Carl Offner         Digital                   offner@hpc.pko.dec.com
  Harvey Richardson   Thinking Machines UK      hjr@think.com
  P. Sadayappan       Ohio State University     saday@cis.ohio-state.edu
  Rob Schreiber       RIACS                     schreiber@riacs.edu
  Jaspal Subhlok      Carnegie Mellon           jass@cs.cmu.edu
  Paula Vaughan       Mississippi St. Univ.     paula@erc.msstate.edu
  Joel Williamson     Convex Computer Corp.     joelw@convex.com
  Henry Zongaro       IBM Canada                zongaro@vnet.ibm.com
  Mary Zosel          LLNL                      zosel@llnl.gov