Variables and Starting Conditions
The functions FindMinimum, FindMaximum, and FindRoot all take variable specifications of the same form. FindFit uses the same form for its parameter specifications.
FindMinimum[f, vars]  Find a local minimum of f with respect to the variables given in vars. 
FindMaximum[f, vars]  Find a local maximum of f with respect to the variables given in vars. 
FindRoot[f, vars]  Find a root f = 0 with respect to the variables given in vars. 
FindRoot[eqns, vars]  Find a root of the equations eqns with respect to the variables given in vars. 
FindFit[data, expr, pars, vars]  Find values of the parameters pars that make expr give a best fit to data as a function of vars. 
Variables and parameters in the "Find" functions.
The list vars (pars for FindFit) should consist of individual variable specifications, each of the following form.
{var, st}  Variable var has starting value st. 
{var, st_{1}, st_{2}}  Variable var has two starting values st_{1} and st_{2}. The second starting condition is only used with the principal axis and secant methods. 
{var, st, rl, ru}  Variable var has starting value st. The search will be terminated when the value of var goes outside of the interval [rl, ru]. 
{var, st_{1}, st_{2}, rl, ru}  Variable var has two starting values st_{1} and st_{2}. The search will be terminated when the value of var goes outside of the interval [rl, ru]. 
Individual variable specifications in the "Find" functions.
The specifications in vars all need to have the same number of starting values. When region bounds are not specified, they are taken to be unbounded, i.e., rl = −∞, ru = ∞.
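As an illustration (this example is not from the original text), a bounded scalar specification starts the search at st = 6 and abandons it if the variable leaves [4, 8]:

```wolfram
(* Find a local minimum of x Sin[x], starting at x = 6;
   the search stops if x leaves the interval [4, 8] *)
FindMinimum[x Sin[x], {x, 6, 4, 8}]
```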
Vector and Matrix Valued Variables
The most common use of variables is to represent numbers. However, the variable input syntax supports variables that are treated as vectors, matrices, or higher-rank tensors. In general, the "Find" commands (with the exception of FindFit, which currently works only with scalar variables) consider a variable to take on values with the same rectangular structure as the starting conditions given for it.
This uses FindRoot to find an eigenvalue and corresponding normalized eigenvector for A.
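The original input and output cells are not reproduced in this extract. A hedged sketch of what the input might look like follows; the matrix A here is an arbitrary assumption, not the one from the original:

```wolfram
(* An assumed symmetric 3x3 matrix; the original A is not shown *)
A = {{2., 1., 0.}, {1., 2., 1.}, {0., 1., 2.}};
(* λ starts from the scalar 1, so it is treated as a scalar;
   x starts from a vector of length 3, so it is treated as a vector *)
FindRoot[{A.x == λ x, x.x == 1}, {λ, 1}, {x, {1., 1., 1.}}]
```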
Of course, this is not the best way to compute the eigenvalue, but it does show how the variable dimensions are picked up from the starting values. Since λ has a starting value of 1, it is taken to be a scalar. On the other hand, x is given a starting value that is a vector of length 3, so it is always taken to be a vector of length 3.
If you use multiple starting values for variables, the values must have consistent dimensions, and each component of the first starting value must be distinct from the corresponding component of the second.
This finds a different eigenvalue using two starting conditions for each variable.
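Again the input cell is missing; a possible form of it, reusing the matrix A and perturbing the second starting values so every component differs, might be:

```wolfram
(* Two starting values per variable select a derivative-free
   (principal axis or secant) method; corresponding components
   of the two starting values must all be distinct *)
FindRoot[{A.x == λ x, x.x == 1},
  {λ, 1, 1.2}, {x, {1., 1., 1.}, {0.9, 1.1, 1.2}}]
```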
One advantage of variables that can take on vector and matrix values is that they allow you to write functions that can be very efficient for larger problems and/or handle problems of different sizes automatically.
This defines a function that gives an objective function equivalent to the ExtendedRosenbrock problem in the UnconstrainedProblems package. The function expects a value of x which is a matrix with two rows. 
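The definition itself is not reproduced in this extract. A sketch consistent with the surrounding text (the extended Rosenbrock sum, with the definition restricted to matrices having two rows) is:

```wolfram
(* Sum of [10 (x2i - x1i^2)]^2 + (1 - x1i)^2 over the columns of x;
   the condition restricts the definition to matrices with two rows *)
ExtendedRosenbrockObjective[x_ /; MatrixQ[x] && Length[x] == 2] :=
  Total[(10 (x[[2]] - x[[1]]^2))^2 + (1 - x[[1]])^2]
```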
Note that since the value of the function would be meaningless unless x had the correct structure, the definition is restricted to arguments with that structure. If you instead defined the function for any pattern x_, then evaluating it with an undefined symbol x (which is what FindMinimum does) would give meaningless, unintended results. When working with functions of vector-valued variables, it is often necessary to restrict the definitions in this way. Note that the definition above does not rule out symbolic values with the right structure. For example, ExtendedRosenbrockObjective[{{x11, x12}, {x21, x22}}] gives a symbolic representation of the function for scalar x11, ...
This uses FindMinimum to solve the problem given a generic value for the problem size. You can change the value of n without changing anything else to solve problems of different size.
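The missing input might look like the following, assuming the ExtendedRosenbrockObjective definition above and the conventional Rosenbrock starting point (-1.2, 1) in each column:

```wolfram
n = 10;  (* change n to solve a problem of a different size *)
FindMinimum[ExtendedRosenbrockObjective[x],
  {x, {Table[-1.2, {n}], Table[1., {n}]}}]
```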
The solution did not achieve the default tolerances because Mathematica was not able to compute symbolic derivatives for the function, so it had to fall back on finite differences, which are not as accurate.
A disadvantage of using vector- and matrix-valued variables is that Mathematica cannot currently compute symbolic derivatives for them. Sometimes it is not difficult to write a function that gives the correct derivative. (Failing that, if you really need greater accuracy, you can use higher-order finite differences.)
This defines a function that returns the gradient for the ExtendedRosenbrockObjective function. Note that the gradient is a vector obtained by flattening the matrix corresponding to the variable positions. 
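The definition is missing from this extract; a sketch consistent with the objective sketched above, differentiating each term and flattening the result to match the variable positions, is:

```wolfram
(* Gradient of the extended Rosenbrock objective: the partials with
   respect to row 1 and row 2 of x, flattened into a single vector
   in the same order as the variable positions *)
ExtendedRosenbrockGradient[x_ /; MatrixQ[x] && Length[x] == 2] :=
  Flatten[{-400 x[[1]] (x[[2]] - x[[1]]^2) - 2 (1 - x[[1]]),
           200 (x[[2]] - x[[1]]^2)}]
```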
This solves the problem using the symbolic value of the gradient.
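A hedged reconstruction of the input, supplying the gradient sketched above through the Gradient option (and reusing the starting matrix from the earlier call):

```wolfram
FindMinimum[ExtendedRosenbrockObjective[x],
  {x, {Table[-1.2, {n}], Table[1., {n}]}},
  Gradient :> ExtendedRosenbrockGradient[x]]
```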
Jacobian and Hessian derivatives are often sparse. You can also specify the structural sparsity of these derivatives when appropriate, which can reduce overall solution complexity considerably.