Eugenio Maul, 2010-10-22. Content in brackets corresponds to optional arguments; you do not necessarily need to write them as part of the command.
System Setup
pwd
Shows your current working directory.
shell dirs -l
Will produce the same result.
shell mkdir [existingdir/]dirname
Creates a directory. You can also include quotation marks if referencing outside your source directory.
set mem 500m [,permanently]
Will set the memory to 500m.
clear
Clears the memory of retained information, allowing you to open a new file.
set more off
Useful to avoid screen breaks (pauses) when running long code.
cd "/Users/johndoe/Work"
Changes to the specified working directory.
erase filename or erase "/path/filename" or rm filename
Will delete the specified file; however, it has to be in the working directory, or this directory must be changed. With erase, if the file is not there, an error is prompted. With rm, you must specify the file extension (typically .dta), but it is better because there is no error prompt.
save filename.dta [,replace]
The ,replace option should almost always be specified to prevent errors when Stata attempts to overwrite a file.
findit keyword
Will look in Stata-supported sites and in my help files for the keyword. The typical pathway to find user-written functions on the web that can do something specific.
Every Do File Should Start With
capture log close
Useful command to start a do file: if a log is open, you cannot open another log file unless the previous one has been closed, or an error will be generated.
set memory 100m [, permanently]
Sets more memory, permanently. The Stata default is very low.
version 11
Useful command to ensure a do-file is run only under Stata 11.
set linesize 80
Useful to determine the length of lines in the log file.
set more off
Gets rid of annoying pauses while running code.
clear all
Useful command.
macro drop _all
Also useful, to prevent previous macros from interfering.
log using logfilename, replace text
Will open a text-based file in the current directory, where the data sent to the output will be saved.
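Putting the commands above together, a minimal do-file preamble might look like this (file and log names are placeholders):

```stata
capture log close      // close any log left open by a previous run
version 11             // run under Stata 11 behavior
clear all              // drop data, programs, and stored results
macro drop _all        // remove leftover macros
set memory 100m        // older Stata versions only; ignored in newer ones
set more off           // no screen pauses
set linesize 80        // line length in the log file
log using mylog, replace text
```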
list varlist [if var1==1]
This command will list the variables specified in the variable list (see below on how to specify a varlist). Using the if qualifier allows you to list only observations that meet a specific criterion.
save file1.dta [,replace]
Most of the time you will use replace, but avoid by all means overwriting the original dataset. Typically in a do file you read the source file (.txt) and end the do file by saving a Stata .dta file. Since the process is repeated several times while improving your code, replace prevents you from having multiple copies of the dataset and being confused about which is the right one.
Observations in the Dataset
set obs ###
Will change the number of observations (rows) in the dataset to the specified number, as long as that number is larger than the current number of observations. The newly created observations will have missing data. Can also be used to start a dataset from scratch in Stata (rarely used).
sort var1
Ascending sort based on var1.
sort var1 var2 var3
Will sort the observations based on the levels of var1, then var2, and lastly var3.
gsort -var1 var2
Useful when you want to sort in descending instead of ascending order. In this case the first level of sorting is based on var1 (descending) and then var2 (ascending).
preserve
This command will save a copy of the current dataset in memory. Useful when you want to run a command that will delete or modify an important part of the data to get to a calculation, but you want a safe and fast way to go back to the current point.
restore
This command will bring the data back to its status at the time of the last preserve command. Typically used in do files.
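A quick sketch of preserve/restore in action (variable names hypothetical):

```stata
preserve
collapse (mean) meanage=age, by(group)   // destroys the row-level data
list                                      // inspect the group means
restore                                   // row-level data is back
```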
label var var1 "Var1's label"
This will assign a label to the variable, often useful because we use short variable names and might not remember what they mean.
label define examplelabel 0 "No" 1 "Yes"
This creates a label tag where 0 is No and 1 is Yes.
label values varlist examplelabel
This command is used to apply the value label examplelabel to all the variables specified in the variable list, typically one variable at a time. When you type edit, you will see that the dataset now displays the label and not the actual value for that variable. But it is numbers that are stored.
numlabel _all, add [mask(#)]
Will add the number to all the value labels. For example, instead of No and Yes, the labels would be 0.No and 1.Yes. This can be useful when you have many different categorical variables. To remove, simply type numlabel _all, remove.
label list
Will provide a list of the labels, and their values, that have already been defined in the dataset.
label drop labelname
Will drop the label from the dataset.
label define examplelabel 3 "dont know", modify
Will add a value label, or modify an existing label, for that specific number.
label save using testfile
Will save a do file with the code to reproduce the labels in the current dataset.
drop varlist
Will drop the specified variables from the dataset. In this case the dropped elements will be full columns.
drop if (var1==1 | var2==1) & var3==2
If no varlist is specified and the command is used with an "if" qualifier, selected rows will be deleted. In this case, all observations where var1==1 or var2==1 while at the same time var3==2. Note that these are relational and not mathematical statements, and therefore we use "==" and not "=".
by id, sort: drop if _n!=1
For each unique value of id, will drop all observations that are not the first observation.
keep varlist
Will drop all but the specified variables from the dataset.
generate [double] newvar= numvar1+numvar2
Generating a numeric variable.
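A short sketch of defining and applying a value label (variable and label names hypothetical):

```stata
label define yesno 0 "No" 1 "Yes"
label values smoker yesno                    // smoker now displays No/Yes
label var smoker "Current smoker (self-reported)"
numlabel _all, add                           // labels display as 0.No / 1.Yes
```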
Mathematical operators are + ; - ; * ; / ; ^
generate newstringvar= textvar1 + " " + textvar2
Generating a string variable from two string variables.
generate newlogicalvar= (sex==2)
This variable will be 1 if true and 0 if false. Relational operators are == (equal), != (not equal), >, >=, <, <=. Logical operators are & (and), | (or) and ! (not).
generate newlogicalvar2 = (sex!=1 & sex!=.)
Assuming the same variable has only 1 (male), 2 (female) or . (missing) values, this second variable results in exactly the same.
generate newlogicalvar3 = (sex!="male" & sex!="")
This is the same as newlogicalvar2, but sex is a string instead of a number.
rename oldvarname newvarname
Will rename a variable.
String-to-Number or Number-to-String
destring var1, replace [force]
Will convert var1, previously a string, into a numeric variable. The force option will make any non-number in the variable a missing value. Use with caution because you might lose info, but when missings are coded for example with "M" and the rest of the values are numbers, this command will be very useful.
tostring var1, replace [force]
Will do the opposite, except that the missing values "." in the numeric form are changed to empty strings "".
encode stringvar1, generate(codedvar1)
A string variable is transformed into a numeric variable. The text in the variable becomes the label attached to each number. Numbers start from 1 and are assigned according to the alphabetical order of the values.
decode codedvar1, generate(stringvar1)
Will transform a labeled numeric (typically categorical) variable into a string, assigning the label to the value in the string.
Recoding and Percentiles-Quantiles Calculation
recode var1 (0=1) (1=2)
Will change the zeros to 1 and the 1s to 2 in var1. If the variable had value labels you will have to drop, change, or create and assign a new value label, or you will end up with a mess. In my opinion it is safer to use recode with the generate option seen below.
recode var1 (min/5=1) (5/10=2) (10/max=3)
If there is overlap, numbers will be put in the lower category; i.e., in this case the number 5 will be recoded as 1 and not 2.
recode var1 (0=1) (1=2), gen(var2)
In this case a new variable, var2, is created from the recode process while preserving the original variable's data.
xtile newvar=oldvar, nq(3)
Will create 3 tertiles based on oldvar.
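A quick sketch contrasting destring and encode (variable names hypothetical; age_str holds numbers stored as text, sexstr holds "male"/"female"):

```stata
destring age_str, replace force      // "42" -> 42; non-numbers like "M" -> .
encode sexstr, generate(sexnum)      // "female"/"male" -> 1/2, with labels
decode sexnum, generate(sexstr2)     // back to strings, taken from the labels
```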
String Functions
gen newvar=word(oldvar,1)
Will return the first word of the string variable oldvar. A -1 will return the last word, and so on and so forth.
gen newvar=substr(oldvar,3,7)
Will create a variable with 7 characters of oldvar, starting at the third character. If we use . instead of 7, it returns everything from the third character to the last.
gen newvar=subinstr(oldvar,"aa","bb",2)
Will generate a variable equal to oldvar, but the first 2 matches of "aa" in the text will be substituted with "bb". If you type "" instead of "bb", they are eliminated from the variable; if you type . instead of 2, all the "aa"s in the variable will be replaced.
help string functions
For more useful functions.
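The string functions above might be combined like this (name is a hypothetical string variable):

```stata
gen first    = word(name, 1)                  // first word of name
gen initial  = substr(name, 1, 1)             // first character
gen clean    = subinstr(name, "  ", " ", .)   // replace all double spaces
```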
Merging and Appending Data
merge 1:1 indexvarlist using file2.dta
There must be an open dataset in Stata; this command will make Stata add the information of file2.dta (must be in Stata format) to file 1, based on matching of the indexvarlist. The indexvarlist (as a combination) must have no duplicates in either file. Typically it is one variable called id; the names of the index variables must be the same in both datasets or the command will not work. A new variable called _merge will be created; if its value is 3, it means there is a match between both datasets. A value of 1 or 2 means the observation appears only in the first or second dataset, respectively. You often want to keep only the observations that match. In that case, after the merge command type keep if _merge==3.
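A minimal merge sketch (file names and the id variable are placeholders):

```stata
use file1, clear
merge 1:1 id using file2    // id must uniquely identify rows in both files
keep if _merge==3           // keep matched observations only
drop _merge
```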
merge 1:m indexvarlist using file2.dta
For the unique values of the indexvarlist combination, there can be many records in the using file.
append using file2.dta [, generate(newvar) force]
Will add observations (rows) to the dataset, instead of variables (columns) as done by merge. The generate(newvar) option will create a variable that indicates the source of each row; force will stack variables with the same name even if in one dataset the variable is string and in the other it is numeric.
Date and Time
generate newdatevar= date(olddatevar,"DMY",2010)
Generating a date variable from a string containing a date. Stata by default imports all dates as text; they must be converted. See help for other date formats; in this case Stata will assume the string variable has the date in the format dd-mm-yyyy, and any 2-digit years will be set to 19XX if >2010 and 20XX if <=2010. The variable is a number that represents the number of days elapsed since 01jan1960. To display it as a date rather than a number you should type format %td newdatevar (%td here is the same as %tdDDmonCCYY).
generate double newdatetime= clock(olddatetimevar, "DMYhms")
Similar principles as above; this will be the number of milliseconds elapsed since 01jan1960. To preserve precision it is important to type double. To display as a date-time, type format %tc newdatetime (%tc here is the same as %tcDDmonCCYY_HH:MM:SS).
generate double newtime= clock(oldtimevar, "hm")
We assume oldtimevar has a time in the form 23:50 or 11:50pm. Similar principles as above; this will be the number of milliseconds elapsed since 01jan1960, and to preserve precision it is important to type double. To display as a time, type format %tcHH:MM newtime (23:50) or alternatively format %tchh:mmam newtime (11:50pm).
generate before2000=1 if datevar < date("01-01-2000","MDY")
The last part of the command is called a date pseudofunction; the same could be done with time pseudofunctions.
This command will generate a variable that has a 1 only if datevar is before 2000; the rest of the values of the variable will be missing. See the cond command in the programming functions for details on how to generate, in one command, 1 if before 2000 and 0 otherwise. A second simple one-line way is with a logical expression: generate before2000= (datevar < date("01-01-2000","MDY")) if datevar !=.
generate dayvar= dow(datevar)
This returns the day of the week to the newly created dayvar. Instead of dow you can use day (for day of month), month (for month of the year), year (for year), or doy (for day of the year). On a date-time variable, hh() will return the number of elapsed hours in the day and mm() the minutes; to extract a date from a date-time variable you must first convert it to a date using the dofc(datetimevar) function. Remember that in one hour there are 60*60*1000 milliseconds.
generate age= (td(01jan2010)-dobvar)/365.25
This is the same as generate age= (date("01-01-2010","MDY")-dob)/365.25.
Reshaping Data
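The date conversions above can be sketched as follows (datestr and dob are hypothetical variables; datestr holds dates like "22-10-2010"):

```stata
gen eventdate = date(datestr, "DMY")     // days since 01jan1960
format %td eventdate                     // display as 22oct2010
gen age = (td(01jan2010) - dob)/365.25   // age in years at a fixed date
```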
reshape long varlist, i(uniqueidvars) j(newvar) [string]
Will reshape data from wide to long format. Uniqueidvars are the variable, or variable combination, that identifies the row in the wide format. The varlist is the list of variables to be reshaped to the long format. The optional string is very useful; it indicates that the newvar content can be of string type. For example, if reshaping iopod and iopos to long, I specify reshape long iop, i(id) j(eye) string, and the new variable will be iop, with eye taking the values "od" or "os". This is a hard command to learn but hugely useful.
reshape wide varlist, i(wide-id-vars) j(suffixvar) [string]
Same as above; the string option is again optional. Suffixvar means that the value of suffixvar will be appended to all reshaped variable names: the iop var now has iopod and iopos versions. Wide-id-vars means the combination of those variable values will constitute a unique value in the wide format.
collapse varlist1, by(varlist2)
varlist1 can be continuous or categorical (numeric) variables, typically continuous. Varlist2 is typically categorical. For each unique combination of observations defined by varlist2, the mean of varlist1 is created, and all the variables that are not in varlist1 or varlist2 are eliminated. With, e.g., collapse (mean) avgage=age numwt=wt (count) numkids=birth, by(famid) you can specifically indicate what kind of summary measure you want generated for each of the vars in varlist1.
expand, stack, xpose, assert
Are other useful commands; see help.
Basic Programming Functions
#delimit ;
Will change the line delimiter from carriage return to semicolon. Can only be used in do files.
#delimit cr
Will change the line delimiter back from ; to carriage return.
/* */
In a do file, whatever is between both signs is ignored by Stata, no matter how many lines are in between.
**
The rest of the line is ignored by Stata after the two signs.
//
Same as above.
///
Added at the end of a line, will concatenate the current and next line as one when read by Stata in a do file.
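The iop example above, sketched in full (id, iopod, iopos are the hypothetical wide-format variables):

```stata
// wide data: one row per id, with iopod and iopos columns
reshape long iop, i(id) j(eye) string   // long: eye takes "od" or "os"
reshape wide iop, i(id) j(eye) string   // back to iopod / iopos
```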
by as a prefix is very important to obtain separate analyses for different subpopulations depending on a variable.
by catvar1, sort: egen contvar2=mean(contvar3)
(var1 categorical, vars 2 and 3 numerical.) For each level of catvar1, a new variable called contvar2 is generated with the average of contvar3.
bysort catvar1:
In the example above, this would be equivalent to by catvar1, sort:.
if
Example uses of if: list if missing(age) or list if age==. Operators: < less than, <= less than or equal, == equal, > greater than, >= greater than or equal, != not equal, & and, | or, ! not (logical negation; ~ can also be used). Parentheses ( ) are for grouping, to specify the order of evaluation. In logical operations & is evaluated before |. The order of evaluation (from first to last) of all operators is ! (or ~), ^, - (negation), /, *, - (subtraction), +, != (or ~=), >, <, <=, >=, ==, &, and |.
generate id = _n
Creates a variable with the observation number.
generate nt = _N
Creates a variable with the total number of observations available.
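A small sketch of the by prefix and _n (group and age are hypothetical variables):

```stata
bysort group: egen meanage = mean(age)   // group mean attached to every row
gen id = _n                              // row number within the dataset
list if missing(age)                     // rows with missing age
```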
[_n+1]
Use of _n to calculate across rows. I.e., gen deltavar= var1-var1[_n-1] will calculate the delta from the previous observation (current minus previous value of var1). Of course it is assumed that the data have been correctly sorted beforehand (i.e., by time). sort id, and then list if id == id[_n+1], will find duplicates.
asciiplot
Will generate a plot with the signs that you can insert using the char() function. di char(240) will display the Mac "apple sign". This can also be used in the title of a graph, for example using title("This is a test to display `=char(240)' ").
generate newvar=cond(var1>10,1,0)
Just as in Excel you can nest multiple arguments using if, in Stata you can do it using the cond() function: gen newvar=cond(expression, result if expression is true, result if expression is false). You can nest as many conditions as you want; apparently there is no limit, except for the limit on expression length.
local localexpression = expression
local localexpression "string"
This is called a local macro; they are typically used within a specific do file to run code on a dataset. Restrict = to mathematical or calculated expressions. The length of these macros is limited compared to macros saved as text using quotes.
local localexpression : extended_function
After the colon you can use an extended function (see help local).
`localexpression'
Note the special kind of quotes used to invoke saved local macros in Stata.
return list / ereturn list
See help for details, but these commands are useful to list the scalars (r-class) and e-class objects generated after a command.
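A sketch combining local macros, cond(), and r-class results (cutoff value and var1 are hypothetical):

```stata
local cutoff = 10
gen high = cond(var1 > `cutoff' & var1 < ., 1, 0)  // 1 above cutoff, 0 otherwise
summarize var1
display r(mean)          // r-class scalar stored by summarize
```

Note the var1 < . condition: since missing is treated as positive infinity, a bare var1 > `cutoff' would classify missings as 1.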
Loops
foreach x in any_list {
foreach x of local lmacname {
foreach x of varlist varlist {
forval x = numlist {
All of these are potential loops that execute the commands specified within the loop for each of the x elements. Within the loop the elements must be cited as `x' rather than x. Loops can be nested within each other. You can replace the x with any word or letter that you like. Each loop must always be closed with a closing curly bracket }.
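Two minimal loop sketches (the variable names age, wt, and ht are placeholders):

```stata
foreach v of varlist age wt ht {
    summarize `v'            // note the `v' quoting inside the loop
}
forvalues i = 1/3 {
    display "iteration `i'"
}
```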
if exp { ; commands ; } ; else if exp { ; commands ; } ;
(";" represents a new line.) Harder to use than loops; save for later when more experienced.
while expression { ; commands ; } ;
Similar to if; less efficient.
gen cumprob=binomialtail(n,x,p)
cumprob is the probability of X (my outcome) being greater than or equal to x out of n trials, given a probability of success in one attempt of p and n attempts.
gen nprob=binomialtail(n,x,p)-binomialtail(n,x+1,p)
This would generate the probability of getting exactly x successes in n attempts, with a one-trial probability of p.
gammap(x,k)
Using the Poisson distribution, this is the probability of observing x or more events when k are expected: P(X>=x).
gammap(x+1,k)
This calculates the probability of more than x events.
cprob=1-gammap(x+1,k)
This calculates the probability of x or fewer events.
1-gammap(1,k)
This calculates P(X<=0), which is the same as P(X=0).
gammap(x,k)-gammap(x+1,k)
This calculates P(X=x).
normal(z)
Gives you the cumulative probability for a given z value.
invnormal(p)
Gives you the z value for that cumulative probability.
ttail(n,t)
Gives you 1 minus the cumulative distribution of a t value for n degrees of freedom.
invttail(n,p)
Gives you the t value at which 1-cumulative is equal to p (i.e., invttail(1000,0.975) = -1.96).
Missing Values
any_number < . < .a < .b etc.
A missing value is considered positive infinity in Stata. In addition to ., you can have .a-.z; the later the letter, the higher the value.
mvdecode var1, mv(0=.)
var1 values equal to 0 are changed to missing.
misstable sum (varlist)
Gives a count of missing values per variable (VERY USEFUL).
misstable pattern (varlist)
Gives an estimate of the rows with missing values. Useful for multiple imputation, to determine the appropriate options to impute data depending on the pattern.
egen newvar= rowmiss(varlist)
newvar will contain the number of missing values of the varlist variables in the row.
egen newvar=anycount(varlist), values(99)
This is good if I have assigned a value of 99 to missing values.
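The probability functions above can be checked interactively with display, for example:

```stata
display binomialtail(10, 3, 0.5)   // P(X >= 3) in 10 trials with p = 0.5
display 1 - gammap(1, 2)           // P(X = 0) when 2 events are expected
display invnormal(0.975)           // approximately 1.96
```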
Exploring Continuous Variables (Normality-Outliers)
stem varlist
Stem-and-leaf plots.
graph matrix varlist, half
histogram var, normal kdensity
normal will draw the normal curve; kdensity will draw the density function.
graph box contvar
Boxplot; to display by levels of a categorical variable use graph box contvar, by(catvar). See dotplot for an alternative.
quietly lv contvar ; list contvar if ((contvar >= (r(u_F) + 1.5*(r(u_F)-r(l_F)))) | (contvar <= (r(l_F) - 1.5*(r(u_F)-r(l_F)))))
Will display Tukey outlier values.
swilk contvarlist
(Explore normality.)
qnorm var1, grid
(To explore normality.)
boxcox contvar, nolog
Finds the optimal scale for a normal distribution. If lambda = 1, no transformation; 0, log transform; -1, reciprocal.
Exploring Continuous Variables (Summary - Dispersion)
summarize varlist [,detail]
Varlist must all be continuous variables. If you add ,detail it gives you much more information (variance, skewness, kurtosis, percentiles).
tabstat varlist, statistics(n mean sd)
Probably the best command available in Stata for summarizing continuous variables.
tabstat varlist, statistics(n mean sd) by(categvar) c(s) lo
The c(s) means that instead of putting the variables in the columns, the statistics are put in the columns, which is much better. The lo option makes the left stub wider, so each group shows on one line.
table rowvar colvar supercolvar, by(up to 4 superrowvars) contents(statistic1 var1 statistic2 var2 statistic3 var3 ...)
Not as used as tabstat; consider for more complex tables.
centile varlist, centile(50) cci level(95)
95% confidence interval for the median.
Exploring Categorical Variables
tab1 varlist [,missing nolabel]
Only one categorical variable at a time. The ,missing option will include missing values in the report and percentages. The nolabel option will display numbers instead of labels.
tab var1 var2, row [chi2 exact]
Cross-tabulation with row percentages; instead of row you can use col or all. exact will calculate Fisher's exact test.
tabi a b \ c d [, chi2 row col]
Immediate 2x2 tabulation, useful when reproducing tables from literature that you are reading, to verify statistics or do your own statistical analysis.
tab2 varlist
Can be used to tabulate all combinations of multiple variables in the list, or just the first against the rest.
cii n p
Will produce a confidence interval for a single proportion.
bitesti n x p(hypothesized)
Testing Differences in Proportions: Unpaired Data
prtest varname == #p, level(#)
One-sample comparison to #p. varname must be coded as 0 and 1. np and nq must be above 5.
prtesti #obs1 #p1 #p2 [, level(#) count]
count indicates that p1 is a count and not a percentage. One-sample comparison, immediate test.
prtest varname1 == varname2 [, level(#)]
Two-sample proportions test. Data must be coded as 0 and 1. Remember that prtest is a z test, and it requires np and nq to be above 5.
prtest varname, by(groupvar) [level(#)]
Two-sample proportions test, a different version of the above.
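A concrete tabstat sketch (age, wt, and group are hypothetical variables):

```stata
tabstat age wt, statistics(n mean sd)                // overall summary
tabstat age wt, statistics(n mean sd) by(group) c(s) // one line per group
```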
prtesti #obs1 #p1 #obs2 #p2 [, level(#) count]
Immediate form of the above 2-sample tests; if count is specified, everything is expected to be specified in counts.
bitest varname == # [, detail]
Gives more exact p values than prtest, but designed for only one tail.
bitesti #N #succ #p [, detail]
tab2 var1 var2, chi2 exact
With chi2 or Fisher's exact test we only get a p value, no confidence interval. prtest requires certain assumptions that involve a higher number of cases and higher totals in the groups, so chi2 is very handy.
tabi a b \ c d , chi2 row col
Similar to above.
cs casevar expovar [, or woolf exact]
cs refers to cohort studies; it will test for differences in proportions in the form of relative risk (as opposed to the previous commands, which test absolute differences). casevar is 0 for non-case, 1 for case; expovar determines whether the patient was exposed or not.
csi #a #b #c #d [, csi_options]
An immediate form of the above command (where a is cases exposed, b is cases unexposed, c is noncases exposed, d is noncases unexposed).
cc case_var expovar [weight]
Stands for case-control study. Will calculate the OR.
cci #a #b #c #d
Works in the same way as csi in terms of how to input the values.
glm vary varx, link(identity) family(binomial)
The beta coefficient will be the risk difference (like prtest). The intercept is the risk for the unexposed.
glm vary varx, link(logit) family(binomial)
The beta coefficient will be lnOR (like cc). If you add the eform option, you should get the same coefficient as cc.
glm vary varx, link(log) family(binomial)
The beta coefficient will be lnRR. If you add the eform option, you should get the same coefficient as cs.
logit vary varx
(Same as glm with the logit link.)
logistic vary varx
(Will report only the OR, but not the constant.)
Testing Differences in Proportions: Paired Data
mcc var_exposed_case var_exposed_control [, tb level(#)]
Stands for matched case-control data. Will result in McNemar's test. tb will give the test-based p values, although for research purposes only the CI should be used.
mcci #a #b #c #d [, tb level(#)]
a and b are the values for the first row, c and d for the second row. Rows 1 and 2 correspond to cases exposed and cases non-exposed; columns 1 and 2 correspond to controls exposed and non-exposed. Therefore b and c are the discordant cells. (Look at the symmetry and symmi commands for more functionality and for expanding beyond 2x2.)
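The glm links above, sketched with hypothetical 0/1 variables dead (outcome) and exposed:

```stata
glm dead exposed, link(log) family(binomial) eform    // coefficient is the RR
glm dead exposed, link(logit) family(binomial) eform  // coefficient is the OR
logistic dead exposed                                 // OR again, shorter form
```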
ranksum varname, by(groupvar) [porder]
This is the Mann-Whitney (Wilcoxon rank-sum) test for unpaired data. porder reports the probability that an observation from the first group is larger than one from the second. If n is less than 25-35 per analyzed group and the observations are non-normal, you should use Mann-Whitney. Use swilk and sfrancia to test for normality.
median varname, by(groupvar) [median_options]
Nonparametric equality-of-medians test.
Testing Differences in Continuous Variables: Unpaired Data
regress vary i.varx1 [robust]
robust is very useful when there is heterogeneity of variance. vary is the outcome, a continuous variable, and varx a categorical variable.
pnorm
More sensitive to deviations from normality near the center of the distribution.
qnorm
More sensitive to deviations from normality towards the extremes of the distribution.
oneway vary varx, tab bonferroni
This is a one-way ANOVA. First check Bartlett's test: if non-significant, variances are assumed similar across groups; proceed. If significant, variances are assumed different: either transform the variable, perform separate t tests for unequal variances with a Bonferroni adjustment, or use a nonparametric test (Kruskal-Wallis). If the ANOVA model is significant, then look at the multiple-comparisons data to determine which comparison is significantly different.
glm vary i.varx, family(normal) link(identity)
Same as ANOVA, using a linear model.
regress vary i.varx
regress uses the t statistic, glm uses z statistics; therefore regress produces wider confidence intervals. Also the same as ANOVA.
Testing Differences in Continuous Variables: Paired Data
ttest var1 == var2
Paired t test (unless instructed otherwise). If the sample size is small, perform swilk first.
signrank varname = exp
Wilcoxon matched-pairs test. In exp you can write a value for a one-sample test, or specify a different variable for a two-sample test.
signtest varname = exp
Alternative to signrank.
Graphs
Sample Size Calculation
Comparing Proportions
Comparing Continuous Vars
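The paired tests above, sketched with hypothetical before/after variables wt1 and wt2:

```stata
ttest wt1 == wt2        // paired t test on the within-subject differences
signrank wt1 = wt2      // nonparametric alternative (Wilcoxon matched pairs)
```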
Appendix 1: Numeric Storage and Precision
double: range -8.9884656743*10^307 to 8.9884656743*10^307; closest to 0 without being 0: +/-10^-323; 8 bytes.
Precision for float is 3.795x10^-8. Precision for double is 1.414x10^-16.
Appendix 2: Specifying Formats for Date-Time Variables Already in Stata Date or Time Format
Format | Implied (fully specified) format
-------+--------------------------------
%tC    | %tCDDmonCCYY_HH:MM:SS
%tc    | %tcDDmonCCYY_HH:MM:SS
%td    | %tdDDmonCCYY
%tw    | %twCCYY!www
%tm    | %tmCCYY!mnn
%th    | %thCCYY!hh
%ty    | %tyCCYY

Code    | Meaning          | Output
--------+------------------+----------------------------------
CC      | century-1        | 01 - 99
cc      | century-1        | 1 - 99
YY      | 2-digit year     | 00 - 99
yy      | 2-digit year     | 0 - 99
JJJ     | day within year  | 001 - 366
jjj     | day within year  | 1 - 366
Mon     | month            | Jan, Feb, ..., Dec
Month   | month            | January, February, ..., December
mon     | month            | jan, feb, ..., dec
month   | month            | january, february, ..., december
NN      | month            | 01 - 12
nn      | month            | 1 - 12
DD      | day within month | 01 - 31
dd      | day within month | 1 - 31
DAYNAME | day of week      | Sunday, Monday, ... (aligned)
Dayname | day of week      | Sunday, Monday, ... (unaligned)
Day     | day of week      | Mon, ...
Da      | day of week      | Mo, ...
day     | day of week      | mon, ...
da      | day of week      | mo, ...
h       | half             | 1 - 2
q       | quarter          | 1 - 4
WW      | week             | 01 - 52
ww      | week             | 1 - 52
HH      | hour             | 00 - 23
Hh      | hour             | 01 - 12
hH      | hour             | 0 - 23
hh      | hour             | 1 - 12
MM      | minute           | 00 - 59
mm      | minute           | 0 - 59
SS      | second           | 00 - 60 (sic, due to leap seconds)
ss      | second           | 0 - 60 (sic, due to leap seconds)
.s      | tenths           | .0 - .9
.ss     | hundredths       | .00 - .99
.sss    | thousandths      | .000 - .999
am      | show am or pm    | am or pm
a.m.    | show a.m. or p.m.| a.m. or p.m.
AM      | show AM or PM    | AM or PM
A.M.    | show A.M. or P.M.| A.M. or P.M.
.       | display period
,       | display comma
:       | display colon
-       | display hyphen
_       | display space
/       | display slash
\       | display backslash
!c      | display character c
+       | separator (displays nothing)

Note: + displays nothing; it may be used to separate one code from the next to make the format more readable. + is never necessary. For instance, %tchh:MM+am and %tchh:MMam have the same meaning, as does %tc+hh+:+MM+am.

Mask codes for the date() and clock() functions:

Code | Meaning
-----+----------------------------------------
M    | month
D    | day within month
Y    | 4-digit year
19Y  | 2-digit year to be interpreted as 19xx
20Y  | 2-digit year to be interpreted as 20xx
h    | hour of day
m    | minutes within hour
s    | seconds within minute
#    | ignore one element