Separation: The underlying problem is to develop techniques so that code that manages latency can be developed and maintained separately from the code that implements application functionality.
Probability: (1) Use of probabilistic models for describing metadata (and, hence, for prioritizing data sources), and (2) extensions of techniques involving materialized views in data mediation.
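One way such probabilistic metadata could drive prioritization is to rank sources by expected benefit per unit of access cost. A minimal sketch, assuming each source carries an estimated probability of holding relevant data and a known access cost (all names and numbers below are illustrative, not from the original):

```python
# Hypothetical sketch: rank data sources by expected relevant results per
# unit of access cost, using per-source probabilities derived from metadata.
# The source list and its numbers are illustrative assumptions.

def prioritize_sources(sources):
    """Sort sources by p(relevant) / access_cost, best first."""
    return sorted(sources,
                  key=lambda s: s["p_relevant"] / s["access_cost"],
                  reverse=True)

sources = [
    {"name": "warehouse", "p_relevant": 0.9, "access_cost": 10.0},
    {"name": "cache",     "p_relevant": 0.4, "access_cost": 1.0},
    {"name": "web",       "p_relevant": 0.2, "access_cost": 5.0},
]

order = [s["name"] for s in prioritize_sources(sources)]
print(order)  # ['cache', 'warehouse', 'web']
```

The cheap cache is tried first even though the warehouse is more likely to have the answer, because its expected yield per unit cost is higher.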
Profiles: The technical issue I'd like to raise is enabling users to register "requests for data", which could take the form of profiles or views, in an environment with large numbers of users and data sources.
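With large numbers of users and sources, the hard part of standing "requests for data" is matching arriving data against all registered profiles efficiently. A minimal sketch, assuming profiles are conjunctions of attribute-value predicates and using an attribute index so matching touches only candidate profiles (all names are illustrative):

```python
# Hypothetical sketch: users register standing profiles as attribute-value
# predicates; arriving items are matched via an index on (attribute, value)
# pairs instead of scanning every registered profile.
from collections import defaultdict

class ProfileRegistry:
    def __init__(self):
        self.index = defaultdict(set)   # (attr, value) -> users requiring it
        self.profiles = {}              # user -> full predicate

    def register(self, user, predicate):
        """predicate: dict mapping attribute -> required value."""
        self.profiles[user] = predicate
        for attr, val in predicate.items():
            self.index[(attr, val)].add(user)

    def match(self, item):
        """Return users whose entire predicate is satisfied by the item."""
        candidates = set()
        for attr, val in item.items():
            candidates |= self.index[(attr, val)]
        return sorted(u for u in candidates
                      if all(item.get(a) == v
                             for a, v in self.profiles[u].items()))

reg = ProfileRegistry()
reg.register("alice", {"topic": "storage"})
reg.register("bob", {"topic": "storage", "region": "eu"})
print(reg.match({"topic": "storage", "region": "us"}))  # ['alice']
```

The index keeps per-item matching proportional to the number of candidate profiles rather than the total number of registered users.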
Multiple copies: Techniques for managing multiple copies of data can provide improved availability, scalability, and locality, bringing us closer to zero latency.
Workflow: The concept of zero latency should be expanded from its current narrow database focus to include both software and control (i.e., workflow).
Anticipate: To answer a user's query before it is asked, novel techniques have to be developed to anticipate answers based on the user's profile. Since an anticipated answer might not be fully correct, a benefit/cost model should determine whether additional queries are required to make the answer more complete.
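The benefit/cost decision above can be made concrete as a marginal-utility test: issue a follow-up query only while the value of the extra completeness it buys exceeds its latency cost. A minimal sketch, with illustrative weights and coverage figures (none of these numbers come from the original):

```python
# Hypothetical sketch of the benefit/cost model: an anticipated answer
# covers some fraction of the expected result; refine only while the
# marginal benefit of added coverage exceeds the cost of another query.

def should_refine(coverage, value_per_coverage, query_cost, coverage_gain):
    """Return True if one more follow-up query pays off."""
    # extra coverage is capped by how incomplete the answer still is
    marginal_benefit = value_per_coverage * min(coverage_gain, 1.0 - coverage)
    return marginal_benefit > query_cost

# anticipated answer already covers 70% of the expected result
print(should_refine(0.70, value_per_coverage=100.0,
                    query_cost=20.0, coverage_gain=0.25))  # True  (25 > 20)
# at 95% coverage the same query is no longer worth its cost
print(should_refine(0.95, value_per_coverage=100.0,
                    query_cost=20.0, coverage_gain=0.25))  # False (5 > 20)
```

In practice the value and cost terms would themselves come from the user's profile and from observed query latencies.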
Joint Modeling: Combining Task Modeling with User Modeling to predict user information requirements.
Hypothetical Reasoning API: A unified API for transactionally interacting with processing systems to obtain predicted required information when that information comes from a (possible) future state of the processing system and the system must be put into that state to obtain it, while preserving the option to abort the transaction should the user choose an alternative course of action.
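The shape of such an API can be sketched as a speculative transaction: drive the system into the hypothetical future state, read the required information from it, then commit or abort. A minimal sketch, modeling system state as a plain dictionary (the class, method names, and scenario are illustrative assumptions):

```python
# Hypothetical sketch: a speculative transaction that puts a processing
# system into a possible future state, answers reads from that state, and
# can be aborted without affecting the live state.

class HypotheticalTxn:
    def __init__(self, state):
        self.base = state    # live system state (never touched until commit)
        self.delta = {}      # speculative writes forming the future state

    def apply(self, key, value):
        """Advance the hypothetical state (e.g. simulate a user action)."""
        self.delta[key] = value

    def read(self, key):
        """Obtain the predicted required information from the future state."""
        return self.delta.get(key, self.base.get(key))

    def commit(self):
        self.base.update(self.delta)

    def abort(self):
        self.delta.clear()

state = {"orders_open": 3}
txn = HypotheticalTxn(state)
txn.apply("orders_open", 4)     # "what if the user places this order?"
print(txn.read("orders_open"))  # 4 -- answered from the future state
txn.abort()                     # user chose an alternative course of action
print(state["orders_open"])     # 3 -- live state is untouched
```

A real processing system would expose the same begin/read/abort surface, with the delta held by the system rather than by the client.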
Limits of precomputation? You focus very heavily on precomputation. This is wonderful when possible, but by no means always possible.
Tradeoff: Explicitly making the tradeoff between quality and time that you discuss, though perhaps not in the sense of "consistency" that I think you describe. "Online Aggregation" is one piece of our bigger picture.
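The quality/time tradeoff in the spirit of Online Aggregation can be sketched as a running estimate that the caller stops refining once its confidence interval is tight enough. A minimal sketch using the standard one-pass running-mean update; the stopping threshold and data are illustrative assumptions:

```python
# Hypothetical sketch: stream values into a running average (Welford's
# one-pass update) and stop early once an approximate 95% confidence
# interval is narrower than the caller's quality target.
import math
import random

def online_average(values, half_width_target):
    n, mean, m2 = 0, 0.0, 0.0
    for x in values:
        n += 1
        d = x - mean
        mean += d / n
        m2 += d * (x - mean)
        if n >= 30:  # wait for a minimal sample before testing the interval
            stderr = math.sqrt(m2 / (n - 1) / n)
            if 1.96 * stderr <= half_width_target:
                return mean, n  # good enough -- trade remaining time away
    return mean, n

random.seed(0)
data = [random.gauss(50.0, 5.0) for _ in range(100_000)]
est, used = online_average(data, half_width_target=0.5)
print(round(est, 1), used)  # close to 50 after a small fraction of the data
```

Tightening `half_width_target` buys quality at the price of time; loosening it does the reverse, which is exactly the tradeoff being made explicit.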
Partial precomputation: Compute (multiple alternative) intermediate results that seem more stable, and combine them with recent data when called for.
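This can be sketched as materializing an aggregate over the stable bulk of the data and merging it with the small recent tail at query time. A minimal sketch for an average; the split between stable and recent rows is an illustrative assumption:

```python
# Hypothetical sketch of partial precomputation: a materialized partial
# aggregate over stable history, combined on demand with recent rows.

def precompute(stable_rows):
    """Materialize the stable part once (e.g. in a nightly batch)."""
    return {"sum": sum(stable_rows), "count": len(stable_rows)}

def answer(view, recent_rows):
    """Combine the materialized view with recent data when called for."""
    total = view["sum"] + sum(recent_rows)
    count = view["count"] + len(recent_rows)
    return total / count

view = precompute([10, 20, 30, 40])  # stable history, computed ahead of time
print(answer(view, [100]))           # 40.0 -- (100 + 100) / 5, fresh answer
```

Keeping sum and count (rather than the average itself) is what makes the intermediate result combinable; maintaining several alternative views would let the mediator pick whichever best matches the query.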