What is the difference between a numerical solution and an approximate solution?

What is the difference between a numerical solution and an approximate solution? Also, when can we say that a method is a numerical method or an approximate method?

Comments

All numerical solutions are approximate.

All approximate solutions are numerical.

When trying to make a model of the actual physical reservoir, all models are approximations, and all models have to be solved numerically.

A numerical solution is when you take the physical equations and solve them using numerical methods or algorithms, i.e. use approximations for the differential operators assuming small increments: dx/dt ≈ (x2 − x1)/Δt, where x1 and x2 are values one small time step Δt apart.

An approximate solution is when you solve the physical equations exactly but use approximate values to get the final answer, i.e. solve the differential equation analytically and then substitute approximate values. For example, solve dx/dt = 2, therefore x = 2t + C, and then use approximate values for t and/or C.
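To make the contrast concrete, here is a minimal sketch (the step count, initial condition and "measured" time value are illustrative assumptions, not from the comments above): the numerical route replaces dx/dt with a finite difference and marches forward in small steps, while the approximate route evaluates the exact formula x = 2t + C with approximate inputs.

```python
# Minimal sketch contrasting the two ideas for dx/dt = 2 with x(0) = 1.
# Step count and the "measured" time are illustrative assumptions.

def numerical_solution(t_end, n_steps=50, x0=1.0):
    """Forward Euler: approximate dx/dt by (x_next - x)/dt and step forward."""
    dt, x = t_end / n_steps, x0
    for _ in range(n_steps):
        x += 2.0 * dt           # x_next = x + f(t, x) * dt, with f = 2
    return x

def analytical_solution(t, C=1.0):
    """Exact integral of dx/dt = 2: x = 2*t + C."""
    return 2.0 * t + C

print(numerical_solution(5.0))     # numerical solution of the model
print(analytical_solution(4.98))   # exact formula, approximate (rounded) input
```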

Numerical solutions are usually applied to highly non-linear equations, e.g. complex flow partial differential equations. This might involve finite-difference approximations derived from Taylor series, or other discretization algorithms.
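For instance, the one-dimensional pressure-diffusion equation dp/dt = η d²p/dx² can be discretized with the Taylor-series finite differences just mentioned. A rough sketch, with purely illustrative grid and fluid values (none of them come from the comments above):

```python
import numpy as np

# Explicit finite-difference sketch for dp/dt = eta * d2p/dx2 (1D pressure diffusion).
# Grid size, time step, diffusivity and boundary pressures are illustrative assumptions.
nx, dx, dt, eta = 50, 10.0, 0.5, 1.0       # cells, m, s, m^2/s
p = np.full(nx, 200.0)                     # initial pressure, bar
p[0] = 150.0                               # fixed-pressure boundary (e.g. a well)

r = eta * dt / dx**2                       # explicit scheme is stable for r <= 0.5
for _ in range(1000):
    # Central difference for d2p/dx2, forward difference in time (from Taylor series)
    p[1:-1] += r * (p[2:] - 2.0 * p[1:-1] + p[:-2])
    p[0], p[-1] = 150.0, 200.0             # re-impose boundary conditions

print(p[:5])                               # pressure profile near the boundary
```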

An approximate solution applies when you try to obtain a direct solution to a complex polynomial or equation, which may involve unequal fractional powers. I always use an approximate solution whenever I am dealing with the Juhasz or Waxman-Smits water saturation equations.
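As an illustration of why such equations need an iterative (approximate) solve: one common form of the Waxman-Smits equation, 1/Rt = (Sw^n / F*)(1/Rw + B·Qv/Sw), is implicit in Sw because of the mixed powers. A hedged sketch with made-up parameter values (none of them from the comments above), using a bracketed root search:

```python
from scipy.optimize import brentq

# Illustrative-only inputs; a real analysis would use measured log and core data.
Rt, Rw = 20.0, 0.05          # true and brine resistivity, ohm.m
phi, m, n = 0.20, 2.0, 2.0   # porosity and Archie-type exponents (a = 1 assumed)
B, Qv = 3.0, 0.5             # Waxman-Smits clay-conductivity terms

F = phi ** (-m)              # formation factor F* = 1 / phi**m

def residual(sw):
    # One common Waxman-Smits form: 1/Rt = (Sw**n / F) * (1/Rw + B*Qv/Sw)
    return (sw ** n / F) * (1.0 / Rw + B * Qv / sw) - 1.0 / Rt

sw = brentq(residual, 1e-6, 1.0)   # iterate for the root on 0 < Sw <= 1
print(f"Waxman-Smits Sw ~ {sw:.3f}")
```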

There can be numerical and analytical solutions. Analytical solutions can be either precise or approximate. Numerical solutions are always approximate.

The problem with analytical solutions is that they can only be applied to very simple problems (i.e. simple well and reservoir geometry). So even if the analytical solution is precise, the answer you get at the end of the day is approximate, because in most cases you had to simplify the problem before the analytical solution could be used.

When it comes to numerical solutions in flow simulation, there are two subtypes of models:

“Full physics” models – these aim to be as precise as possible, i.e. in flow simulators we try to account for all the physical effects we know how to account for. But even in this case it is hard to say that any particular model does “everything possible”; for example, you can always refine the grid in the hope of improving the solution. Full-physics models are also known for long simulation times, which can be a problem for uncertainty modeling and assisted history matching in particular.

On the other end there are reduced-physics models, where we neglect some effects (e.g. gravity, heterogeneity, PVT, etc.). One simple way of getting a “reduced physics model” is coarsening a full-physics model. Those models can be very fast, but if chosen incorrectly they may produce unrealistic results (especially when doing uncertainty quantification and assisted history matching).

Numerical and approximate solutions are based on the same physics used to develop the mathematical equations needed to solve a problem. A coarse model of a reservoir can be considered an approximate solution (approximate analysis), whereas a finely gridded model of the same reservoir is considered a numerical solution (numerical analysis). However, the mathematical equations used in both cases are still approximate, as they are developed from assumptions that do not necessarily represent the real case; e.g. homogeneous and heterogeneous reservoirs are modelled with the same mathematical models and the same amount and quality of structural and fluid data, i.e. both are approximate. Reservoirs are systems that are mathematically mimicked from the available data, which will always result in approximate solutions regardless of the amount and accuracy of the data used to model those reservoirs.

I think we may be in danger of making this too complicated.

We have a physical reservoir. We have a mathematical model of the reservoir.

The mathematical model, normally encoded in software, is an approximation to the physical reservoir.

We use the mathematical model to perform studies, and hope the results of these studies are close enough to the behaviour of the physical reservoir to make sound decisions.

We can solve the mathematical model in many ways. If it is a very simple model, we can solve it analytically. In most cases, we have to solve it numerically.

By analytically, I mean you can write out all the equations and the solution can be expressed as a mathematical equation which can (in principle) be solved exactly. Plug the numbers into your equation and you have the solution.

But even here, you may have an analytical solution, yet it may be expressed with special functions, or even simple functions like sin, cos, exp, Bessel functions, etc., and these are still evaluated using approximate iterative methods, so it is still numerical even if it appears to be analytical.
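A concrete case of this is the classic line-source (exponential-integral) solution of the radial diffusivity equation: the formula is analytical, but the E1 function inside it is still evaluated by approximate numerical routines. A sketch with made-up SI-unit values (all numbers are assumptions for illustration only):

```python
import math
from scipy.special import exp1   # E1 itself is evaluated numerically inside SciPy

# Line-source drawdown: dp = (q*mu / (4*pi*k*h)) * E1(phi*mu*ct*r**2 / (4*k*t))
# All input values below are illustrative, not from the discussion above.
q   = 0.002        # flow rate, m3/s
mu  = 1e-3         # viscosity, Pa.s
k   = 1e-13        # permeability, m2 (about 100 mD)
h   = 10.0         # thickness, m
phi = 0.2          # porosity
ct  = 1e-9         # total compressibility, 1/Pa
r, t = 100.0, 86400.0   # radius (m) and time (s, one day)

x  = phi * mu * ct * r**2 / (4.0 * k * t)
dp = q * mu / (4.0 * math.pi * k * h) * exp1(x)
print(f"drawdown ~ {dp / 1e5:.2f} bar")   # an 'analytical' answer, evaluated numerically
```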

The analytical solution is an exact solution to the model, but the model is an approximation to the physical reservoir.

The numerical solution is an approximate solution to the model, and the model is also an approximation to the physical reservoir.

In most cases the important difference is not between analytical or numerical solutions (provided care is taken in the numerical solution, which it is in all commercial reservoir simulations), but between the mathematical model and the physical reservoir.

As an example, for a given SPE test problem, different commercial simulators (should) give very similar results, which shows that the numerical solution methods have high accuracy.

But nobody would say those SPE test problems bear much relationship to any known physical reservoir.

As an aside, I don’t know whether they do or not, but it would be comforting if there were models for which the analytical solution was known, and the commercial simulators modelled them numerically to reproduce the analytical solutions. The fact that they give similar results on the same model is not sufficient to demonstrate they give the correct results.

In uncertainty quantification it is quite important to verify the numerical methods by testing them against problems with known analytical answers (e.g. the variance of simple correlated Gaussians in high dimensions), but I am not aware of any software which does this, hence the big differences we see in uncertainty quantification between commercial vendors.
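To make that concrete, here is a sketch of the kind of check being described (the dimension, correlation level and sample count are illustrative assumptions): draw correlated Gaussians and compare the sampled covariance against the known analytical one.

```python
import numpy as np

# Verify a sampling-based UQ step against a known analytical answer:
# the covariance of correlated Gaussians in moderately high dimension.
rng = np.random.default_rng(0)
dim, rho, n_samples = 50, 0.6, 200_000   # illustrative choices

# Analytical covariance: unit variances, constant correlation rho off-diagonal
cov_true = np.full((dim, dim), rho) + (1.0 - rho) * np.eye(dim)

samples = rng.multivariate_normal(np.zeros(dim), cov_true, size=n_samples)
cov_est = np.cov(samples, rowvar=False)

err = np.max(np.abs(cov_est - cov_true))
print(f"max covariance error: {err:.3f}")
assert err < 0.02, "sampling does not reproduce the analytical covariance"
```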

If your numerical solution cannot replicate the analytical solution on simple problems, or you have not attempted to replicate it, go away and come back when it does.

Is the question about proxy models derived from exact numerical models?
If so, the answer could be different.

To put it simply: as is well known, no mathematical or physical model is exact with respect to a reservoir. However, the numerical or analytical equations of a mathematical model, if they are simple, can give exact solutions to the model, but not to the reservoir (even when fed with stochastic reservoir data).

Whether analytical, numerical or finite-difference solution methods are used for a complex system of equations depends on the type of that system and its equations (the complexity of the fluid and the flow) in our mathematical model, and on whether an exact or an approximate solution is being sought.

What is the difference between Petrel and Eclipse simulation of the same reservoir model, without upscaling it in Eclipse?

What is the difference between Petrel and Eclipse simulation of the same reservoir model, without upscaling it in Eclipse? (Considering Petrel as a simulator, not as static modelling software.)

Comments
Consider Petrel as a pre- and post-processor for Eclipse (input data and results).

What does Petrel do when you invoke the simulation module? Well, it creates all the files needed by Eclipse, then submits these files to be run in Eclipse… and loads the simulation results back into Petrel.
Let me be redundant… all data files are still run by Eclipse.

Are you talking about Petrel RE? … Static models are finer (in grid and detail) than dynamic models (Eclipse), so an upscaling process is needed; you’ll end up with coarser grids and averaged properties that allow you to run your model faster. Is that what you meant by your question?

No, it’s not about upscaling of the cellular model; it was about using Petrel as a simulator and comparing it with Eclipse.

Petrel is not a numerical simulator.

Petrel RE can create the files needed for the ECLIPSE simulator and load the results back for analysis. When you run a numerical simulation from Petrel, you are invoking ECLIPSE to do the simulation.

It is the same thing, other than that Petrel RE is much easier to use. You may be missing some keywords, but you can find them under Define Simulation, Advanced, Editor, and then it will make your life much easier.

Petrel is not a simulator; the simulator is Eclipse. You can consider Petrel an integrated software package where all the pre-processing and post-processing data is created and displayed. Without Petrel you have to use other separate modules to create your data, like FloGrid, and modules to display your results, like FloViz and Eclipse Office.

Eclipse is a simulator and Petrel just makes static geological models; Petrel also converts the fine model into a coarse one by upscaling. Another difference is that the grid used in Petrel is corner-point, which provides good flexibility for the shape of the geological model, while in Eclipse we can’t simulate directly with this corner-point cell geometry, because we are dealing with flow rates that are controlled by the pressure gradient and by Darcy’s law, so the grid point should be in the centre (midway between the nodes); then we can get good flow behaviour in the simulation.

If I understand correctly, Petrel is mostly used for the geological model, and the Petrel RE module, which is under development by its owner Schlumberger, provides a preliminary view of the dynamic model; but we cannot do without the Eclipse module, which is the most comprehensive for dynamic simulation, keeping in mind that the input files are generated from Petrel.

Maybe Petrel RE will replace Eclipse eventually; then there would be one complete software package for the static geological model and the dynamic development model.

Petrel will never replace Eclipse as the simulator … Schlumberger has come up with a new simulator called Tempest which allows a lot of flexibility in terms of grid shape … Petrel is a pre- and post-processor and Eclipse is the simulator … The only good thing I found from running models through Petrel is the algorithm that generates the transmissibilities in the model … it makes the model run a bit faster and more stably …

Petrel will never replace ECLIPSE; Petrel is a pre- and post-processor for the Eclipse simulation model.

The advantage now with Petrel is that you can run assisted history matching using the objective function and the uncertainty & optimization tools…

Petrel RE is not a simulator; it is more like an easy front-end interface to generate the Eclipse files before the simulation runs (in the Eclipse software). The development strategy feature, for example, is a plus.

I just want to know if the upscaling in Petrel RE is exactly the same as the one done by Petrel (if someone has experienced this before).

Petrel RE acts only as an interface to bring all the input into the deck files; after that it still needs an engine, which is the simulator, called Eclipse 100/300.

We use Petrel RE for simulation because of the pre- and post-processing options. Petrel RE uses the same deck as Eclipse Office, and the simulation is also run with Eclipse. But the uncertainty and optimization tools are very helpful if you are thinking about uncertainty analysis. If you are planning some intelligent well completions, you can easily handle well segmentation and installation of ICDs/ICVs in Petrel RE with the completion modules. Petrel RE also has plug-ins like Mepo and Olyx. Therefore you can carry out a whole study, from static modelling to sensitivity analysis, in one project. This gives you the option of getting a better view of your results. As mentioned above, almost every keyword is also available in Petrel RE, but if you cannot find the keyword you need, you can add it manually to the deck file (DATA file) or add/edit it manually in the Advanced sections. The only problem with Petrel RE is that if you are planning to run a generic radial model, unfortunately you cannot easily create this geometry; you have to manipulate it.

Tempest has nothing to do with Schlumberger; it is a simulator from Roxar. Maybe you are thinking of INTERSECT?

All the comments above about Petrel RE are valid, but I need to clarify the following:

1- Petrel RE is never going to replace ECLIPSE, for the simple reason that ECLIPSE is our simulation engine that solves the flow equations, which Petrel does not do.
2- Our new simulator is INTERSECT, but it is NOT a replacement for ECLIPSE. It is used for very specific cases, such as a highly fractured reservoir with millions of cells.
3- Schlumberger has nothing to do with Tempest 🙂

Tempest is a simulator from ROXAR Ltd. The logic and structure of its data files are very similar to Eclipse’s. However, Tempest has fewer modelling options than Eclipse, although the two simulators’ results do converge.

What do we really mean when we say that Petrel is a pre- and post-processor for the Eclipse simulation model?

Pre-processor, because Petrel RE helps to create all the files that Eclipse needs before submitting a run.

Post-processor, because you can view all the results reported by Eclipse, in either the 3D grid or graphs, after Eclipse finishes a run.

Can I run the files generated by a newer version of Petrel (Petrel 2010) on an older version of Eclipse (Eclipse 2009) in the pre-processing stage? And if that’s possible, can we do it the other way round at the post-processing stage, by importing the older Eclipse simulation results into the newer Petrel?

The answer is yes, you can run DATA files generated with Petrel 2010 on an older version of Eclipse, provided that the keywords generated by Petrel are supported in your Eclipse release. The reverse also works: Eclipse results from any version can be loaded into Petrel.

I want to work on “Field development of a gas condensate reservoir using a compositional simulator” as my project. I would like your help working with the simulator. Which simulation software should I use?

Petrel RE helps you create your simulation model (that’s what we call pre-processing), then you can use Eclipse 300 (compositional) to run your model, and after that, in Petrel RE, you can analyse your results (line plots, 3D, 2D), which is post-processing.

I suggest using PVTi or PVTsim to calibrate your equation of state.

How can I load the Petrel model in Eclipse?

You cannot load the Petrel model in Eclipse unless you run the simulation. You can run the simulation in Petrel itself, or you can export the Petrel keywords for your data deck. Those sections are: GRID, PROPS (porosity, permeability, water saturation), REGIONS (FIPNUM, EQLNUM), SOLUTION (OWC, reference depth), etc. Once you run the simulation, you can see your 3D model parameters in Eclipse.

Petrel RE is a visual aid for the Eclipse code. You can run the simulation (waterflooding etc.) and visualize it. You can see the input data as plots (kr, pc, etc.) and make cross-plots (water cut, recovery, etc.). A very valuable tool.

You can also do the process in reverse: create the model in Petrel, then export it to Eclipse, then re-import it for visualization.

I am working on ECLIPSE and Petrel.

ECLIPSE is a simulator.
Petrel is a pre- and post-processor tool. If you simulate a model in Petrel, it still uses the ECLIPSE simulation engine (or INTERSECT, the new fast simulator). Therefore Petrel is NOT a simulator.
Petrel makes visualisation of the grid, simulation results, etc. easy.

In Russia, differences of up to about 5% (after upscaling, before hydrodynamic simulation) are allowed; those are the percentages permitted when the Russian Central Commission for Field Development reviews the working simulation models of Russian oil fields every 3–5 years.

The question was about the case without ‘upscaling’. That’s really a good question.

When you build the geostatistical model, usually in Petrel, you try to honour the seismic, geological and petrophysical properties. To do that you need to build a high-resolution grid, which means a large number of cells. Now let’s come to ‘upscaling’. Upscaling means reducing the number of cells in order to run the model in Eclipse, but how far you go depends on the power of the computer you have. If you have a computer with 4–8 GB of RAM, your computer is not powerful, so for a model above 100,000 cells you need to upscale. So my answer to your question is: it’s a matter of honouring the model properties in compliance with the seismic, geology and petrophysics. Upscaling usually means losing some of that quality.
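As a simple illustration of what that cell-count reduction does to a property (a hypothetical 2×2 block coarsening, not any particular Petrel or Eclipse algorithm): porosity can be block-averaged, while permeability usually needs harmonic, geometric or flow-based averaging, which is where the quality loss creeps in.

```python
import numpy as np

# Hypothetical 2x2 areal coarsening of fine-grid properties, for illustration only.
rng = np.random.default_rng(1)
fine_poro = rng.uniform(0.05, 0.30, size=(100, 100))     # fine geostatistical porosity
fine_perm = 10 ** rng.normal(2.0, 0.5, size=(100, 100))  # log-normal permeability, mD

def coarsen(prop, block=2, how="arithmetic"):
    """Average block x block groups of fine cells into one coarse cell."""
    n0, n1 = prop.shape[0] // block, prop.shape[1] // block
    blocks = prop[:n0 * block, :n1 * block].reshape(n0, block, n1, block)
    if how == "arithmetic":
        return blocks.mean(axis=(1, 3))
    if how == "harmonic":               # crude proxy for flow in series
        return 1.0 / (1.0 / blocks).mean(axis=(1, 3))
    raise ValueError(how)

coarse_poro = coarsen(fine_poro)                     # 50 x 50 cells instead of 100 x 100
coarse_perm = coarsen(fine_perm, how="harmonic")
print(fine_poro.mean(), coarse_poro.mean())          # average porosity is preserved
print(fine_perm.mean(), coarse_perm.mean())          # permeability average shifts downward
```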