A stock’s price is equal to the net present value (NPV) of all expected future dividends. (See the article elsewhere in the FAQ for an explanation of the time value of money and NPV.) A company will plow its earnings back into the business when it believes it can use the money better than its investors can, i.e., when its investment opportunities are better than those available to its investors. Eventually the company will run out of such projects: it simply cannot expand forever. When it reaches the point where it cannot profitably use all of its earnings, it will either pay dividends, build up a cash mountain, or squander the money. If a company builds a cash mountain, some investors will demand higher dividends, and/or management will waste the money. Look at Kerkorian and Chrysler.

Sure, there are some companies that have recently built up a cash mountain. Microsoft, for instance. But Gates owns a huge chunk of Microsoft, and he’d have to pay 39.6% tax on any dividend, whereas he’d have to pay only 28% (or perhaps 20%) on capital gains. But eventually, Microsoft is going to pay a dividend on its common shares.

From a mathematical perspective, it’s quite clear that a stock price is equal to the NPV of all future dividends. The stock price today is equal to the NPV of the dividends paid during the first year, plus the present value of the stock’s price in a year’s time. In other words, P(0) = PV(Div 1) + PV(P(1)). But the price in a year is in turn equal to the NPV of the dividends paid during the second year plus the PV of the stock price at the end of two years. Keep applying this logic and the terminal price term telescopes away, leaving the stock price equal to the NPV of all future dividends. Stocks don’t mature like bonds do, so there is no final principal payment in the stream.
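To spell the recursion out, here is a sketch of the telescoping argument, assuming for simplicity a constant discount rate $r$ (the rate need only be appropriate for the risk involved):

$$P(0) = \frac{D_1}{1+r} + \frac{P(1)}{1+r}, \qquad P(1) = \frac{D_2}{1+r} + \frac{P(2)}{1+r}, \quad \ldots$$

Substituting repeatedly gives

$$P(0) = \sum_{t=1}^{T} \frac{D_t}{(1+r)^t} + \frac{P(T)}{(1+r)^T} \;\longrightarrow\; \sum_{t=1}^{\infty} \frac{D_t}{(1+r)^t} \quad (T \to \infty),$$

provided the discounted terminal price $P(T)/(1+r)^T$ tends to zero, i.e. there is no bubble term.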

Of course it’s also true that a stock’s price is whatever the market will bear: pure supply and demand. But this doesn’t mean that a stock’s price, or a bond’s price for that matter, can’t be determined by a formula. (Unfortunately, no formula is going to tell you what dividend a company will pay in 5 years.) A bond’s price is equal to the NPV of all coupon payments plus the PV of the final principal payment, discounted at an appropriate rate for the risk involved. Any investment’s price is going to be equal to the NPV of all future cash flows generated by that investment, discounted at the correct rate. The only cash flows that stock investors receive are the dividends. If the price is not equal to the NPV of all future cash flows, then someone is leaving money on the table.
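As a minimal sketch of that bond formula (the face value, coupon, maturity, and discount rate below are made-up illustrative numbers, and annual coupons plus a flat discount rate are simplifying assumptions):

```python
def bond_price(face, coupon_rate, discount_rate, years):
    """Price = NPV of all coupon payments plus PV of the final principal.

    Assumes one coupon per year and a flat discount rate, purely for
    illustration.
    """
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    pv_principal = face / (1 + discount_rate) ** years
    return pv_coupons + pv_principal

# A 10-year bond with a 5% coupon, discounted at 6%, prices below par:
print(round(bond_price(1000, 0.05, 0.06, 10), 2))  # ~926.40
```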

A network perimeter is the boundary between the privately owned, locally managed side of a network and the public, usually provider-managed side.

In software development, a test suite, less commonly known as a validation suite, is a collection of test cases intended to show that a software program has some specified set of behaviours. A test suite often contains detailed instructions or goals for each collection of test cases, and information on the system configuration to be used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Test fixture in xUnit

In generic xUnit, a test fixture is all the things that must be in place in order to run a test and expect a particular outcome.

Frequently, fixtures are created by handling the setUp() and tearDown() events of the unit testing framework. In setUp() one creates the expected state for the test, and in tearDown() one cleans up what was set up.

Four phases of a test:

1. Set up — set up the test fixture.
2. Exercise — interact with the system under test.
3. Verify — determine whether the expected outcome has been obtained.
4. Tear down — tear down the test fixture to return to the original state.
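As a sketch, here is how the four phases map onto Python’s unittest, an xUnit-style framework; the WidgetStoreTest class and its store are hypothetical names invented for illustration:

```python
import unittest

class WidgetStoreTest(unittest.TestCase):
    """Hypothetical example mapping the four phases onto xUnit events."""

    def setUp(self):
        # Phase 1, set up: create the fixture, a store in a known state.
        self.store = {"widgets": ["a", "b"]}

    def test_remove_widget(self):
        # Phase 2, exercise: interact with the system under test.
        self.store["widgets"].remove("a")
        # Phase 3, verify: determine whether the expected outcome occurred.
        self.assertEqual(self.store["widgets"], ["b"])

    def tearDown(self):
        # Phase 4, tear down: discard the fixture so each test starts fresh.
        self.store = None

if __name__ == "__main__":
    unittest.main()
```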

Use of fixtures

Some advantages of fixtures include separating the test initialization (and destruction) code from the tests themselves, reusing a known state for more than one test, and letting the testing framework assume that the fixture set-up has succeeded before each test runs.

All-pairs testing or pairwise testing is a combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by “parallelizing” the tests of parameter pairs.

The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing.[1] Bugs involving interactions between three or more parameters are progressively less common[2], whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs.
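To make the idea concrete, here is a minimal sketch of a greedy pairwise generator (one simple way to choose such test vectors; it is not an optimal covering-array algorithm, and the parameter names in the example are made up):

```python
from itertools import combinations, product

def pairwise_cases(params):
    """Greedily build test cases until every pair of parameter values
    appears together in at least one case."""
    names = sorted(params)
    # Every (param, value, param, value) combination that must appear
    # together in some test case, keyed with names in sorted order.
    uncovered = {(a, va, b, vb)
                 for a, b in combinations(names, 2)
                 for va, vb in product(params[a], params[b])}
    cases = []
    while uncovered:
        a, va, b, vb = next(iter(uncovered))  # seed a case from any uncovered pair
        case = {a: va, b: vb}
        for n in names:
            if n in case:
                continue
            # Choose the value that covers the most still-uncovered pairs
            # against the parameters already fixed in this case.
            def gain(v, n=n):
                return sum(((m, vm, n, v) if m < n else (n, v, m, vm)) in uncovered
                           for m, vm in case.items())
            case[n] = max(params[n], key=gain)
        for a2, b2 in combinations(names, 2):
            uncovered.discard((a2, case[a2], b2, case[b2]))
        cases.append(case)
    return cases

# Example: 3 x 2 x 2 = 12 exhaustive combinations, but far fewer cases
# are needed to cover every pair of parameter values.
cases = pairwise_cases({"os": ["linux", "mac", "win"],
                        "browser": ["ff", "chrome"],
                        "ipv6": [True, False]})
print(len(cases), "cases cover all parameter pairs (vs 12 exhaustive)")
```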

The 1×1 strategy is sufficient for detecting boundary errors, and the N×1 strategy is effective for determining exactly what type of boundary error is present, where N is the dimension of the input space. The 1×1 strategy tests two points for each boundary inequality, one on the boundary and one off the boundary. The two points are chosen as close as possible to one another to ensure that domain shift errors are properly detected. If two domains share a boundary, then the off point is always in the domain that is open with respect to that boundary.

The N×1 strategy tests N+1 points for each boundary inequality: N points on the boundary and one point off the boundary. The off point is chosen at or near the centroid of the on points. One must always be careful to choose the off point so that it is in a valid domain, i.e. one that leads to valid computations or to a specific error condition; otherwise the point may be rejected for coincidental reasons, e.g. by an initial analysis that rejects all points not in a domain that leads to subsequent processing.
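A toy sketch of the N×1 strategy for one boundary inequality in a 2-dimensional input space (N = 2); the domain x + y <= 10 and the chosen points are invented for illustration:

```python
# The domain under test: closed with respect to the boundary x + y = 10.
def in_domain(x, y):
    return x + y <= 10

on_points = [(2.0, 8.0), (8.0, 2.0)]                 # N points on the boundary
centroid = (sum(x for x, _ in on_points) / 2,
            sum(y for _, y in on_points) / 2)        # (5.0, 5.0), also on the line
eps = 1e-6
off_point = (centroid[0] + eps, centroid[1] + eps)   # one point just outside

# The on points must be accepted (the boundary is closed here) and the
# off point rejected; a shifted or tilted boundary would break these checks.
assert all(in_domain(x, y) for x, y in on_points)
assert not in_domain(*off_point)
```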

Single sign-on (SSO) is a property of access control of multiple, related, but independent software systems. With this property a user logs in once and gains access to all systems without being prompted to log in again at each of them. Single sign-off is the reverse property whereby a single action of signing out terminates access to multiple software systems.

In cryptography, a zero-knowledge proof or zero-knowledge protocol is an interactive method for one party to prove to another that a (usually mathematical) statement is true, without revealing anything other than the veracity of the statement.
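As a toy sketch, here is one round of a classic interactive proof of knowledge, Schnorr identification, in which the prover demonstrates knowledge of a secret exponent without revealing it (this protocol is honest-verifier zero-knowledge; the group parameters below are far too small for real use and chosen only so the arithmetic is visible):

```python
import random

p = 1019                       # prime modulus (toy size)
q = 509                        # prime order of the subgroup, q | p - 1
g = pow(2, (p - 1) // q, p)    # generator of the order-q subgroup

secret_x = random.randrange(1, q)   # the prover's secret
public_y = pow(g, secret_x, p)      # public statement: y = g^x mod p

# One round of the protocol:
r = random.randrange(1, q)                  # 1. prover picks a random nonce
commitment = pow(g, r, p)                   #    and commits to g^r
challenge = random.randrange(0, q)          # 2. verifier sends a random challenge
response = (r + challenge * secret_x) % q   # 3. prover responds

# 4. Verifier checks g^response == commitment * y^challenge (mod p),
#    learning nothing about secret_x beyond the statement's truth.
assert pow(g, response, p) == (commitment * pow(public_y, challenge, p)) % p
```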

Code duplication is generally considered a mark of poor or lazy programming style. Good coding style is generally associated with code reuse. It may be slightly faster to develop by duplicating code, because the developer need not concern himself with how the code is already used or how it may be used in the future. The difficulty is that original development is only a small fraction of a product’s life cycle, and with code duplication the maintenance costs are much higher. Some of the specific problems include:

* Code bulk affects comprehension: Code duplication frequently creates long, repeated sections of code that differ in only a few lines or characters. The length of such routines can make it difficult to quickly understand them. This is in contrast to the “best practice” of code decomposition.
* Purpose masking: The repetition of largely identical code sections can conceal how they differ from one another, and therefore, what the specific purpose of each code section is. Often, the only difference is in a parameter value. The best practice in such cases is a reusable subroutine.

* Update anomalies: Duplicate code contradicts a fundamental principle of database theory that applies here: Avoid redundancy. Non-observance incurs update anomalies, which increase maintenance costs, in that any modification to a redundant piece of code must be made for each duplicate separately. At best, coding and testing time are multiplied by the number of duplications. At worst, some locations may be missed, and for example bugs thought to be fixed may persist in duplicated locations for months or years. The best practice here is a code library.
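A hypothetical before/after sketch of the parameter-value case described above; the function and field names are invented for illustration:

```python
# Before: two near-identical routines that differ only in a threshold.
# The duplication masks their shared purpose, and any bug fix must be
# applied (and tested) in both places.
def flag_large_orders(orders):
    flagged = []
    for o in orders:
        if o["total"] > 1000:
            flagged.append(o["id"])
    return flagged

def flag_huge_orders(orders):
    flagged = []
    for o in orders:
        if o["total"] > 10000:
            flagged.append(o["id"])
    return flagged

# After: the shared logic lives in one reusable subroutine, so a change
# to the filtering rule is made exactly once.
def flag_orders_over(orders, threshold):
    return [o["id"] for o in orders if o["total"] > threshold]
```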
